From 24713c07634b857ae7b71ea02113b27137a7d84f Mon Sep 17 00:00:00 2001 From: Ron Cohen Date: Thu, 25 Jan 2018 16:12:59 +0100 Subject: [PATCH 01/31] WIP on docs improvements added: * command reference * logging configuration * path config * directory layout --- docs/configuring.asciidoc | 61 +- .../command-reference.asciidoc | 581 ++++++++++++++++++ docs/copied-from-beats/loggingconfig.asciidoc | 184 ++++++ .../shared-directory-layout.asciidoc | 99 +++ .../shared-env-vars.asciidoc | 110 ++++ .../shared-path-config.asciidoc | 99 +++ docs/high-availability.asciidoc | 1 - docs/index.asciidoc | 6 +- docs/installing.asciidoc | 10 +- docs/overview.asciidoc | 2 +- docs/security.asciidoc | 1 - docs/setting-up-and-running.asciidoc | 64 ++ 12 files changed, 1160 insertions(+), 58 deletions(-) create mode 100644 docs/copied-from-beats/command-reference.asciidoc create mode 100644 docs/copied-from-beats/loggingconfig.asciidoc create mode 100644 docs/copied-from-beats/shared-directory-layout.asciidoc create mode 100644 docs/copied-from-beats/shared-env-vars.asciidoc create mode 100644 docs/copied-from-beats/shared-path-config.asciidoc create mode 100644 docs/setting-up-and-running.asciidoc diff --git a/docs/configuring.asciidoc b/docs/configuring.asciidoc index 04a2470615f..d91b5b7dd1e 100644 --- a/docs/configuring.asciidoc +++ b/docs/configuring.asciidoc @@ -1,56 +1,21 @@ -[[configuring]] -== Configuring and running APM Server +[[configure]] += Configure APM Server -In a production environment, -you would put APM Server on its own machines, -similar to how you run Elasticsearch. -You _can_ run it on the same machines as Elasticsearch, -but this is not recommended, -as the processes will be competing for resources. -To start APM Server, run: +[partintro] +-- +The following explains how to configure APM Server: -[source,bash] ----------------------------------- -./apm-server -e ----------------------------------- +* <> +* <> +* <> +-- -You should see APM Server start up. 
-It will try to connect to Elasticsearch on localhost port 9200 and expose an API to agents on port 8200. -You can change the defaults by supplying a different address on the command line: -[source,bash] ----------------------------------- -./apm-server -e -E output.elasticsearch.hosts=ElasticsearchAddress:9200 -E apm-server.host=localhost:8200 ----------------------------------- -Or you can update the `apm-server.yml` configuration file to change the defaults. +include::./copied-from-beats/loggingconfig.asciidoc[] -[source,yaml] ----------------------------------- -apm-server: - host: localhost:8200 +:standalone: +include::./copied-from-beats/shared-env-vars.asciidoc[] -output: - elasticsearch: - hosts: ElasticsearchAddress:9200 ----------------------------------- - - -NOTE: If you are using an X-Pack secured version of Elastic Stack, -you need to specify credentials in the config file before you run the commands that set up and start APM Server. -For example: - -[source,yaml] ----- -output.elasticsearch: - hosts: ["ElasticsearchAddress:9200"] - username: "elastic" - password: "elastic" ----- - -See https://github.com/elastic/apm-server/blob/{doc-branch}/apm-server.reference.yml[`apm-server.reference.yml`] for more configuration options. - -include::./security.asciidoc[] - -include::./high-availability.asciidoc[] +include::./copied-from-beats/shared-path-config.asciidoc[] diff --git a/docs/copied-from-beats/command-reference.asciidoc b/docs/copied-from-beats/command-reference.asciidoc new file mode 100644 index 00000000000..248219ca91c --- /dev/null +++ b/docs/copied-from-beats/command-reference.asciidoc @@ -0,0 +1,581 @@ +////////////////////////////////////////////////////////////////////////// +//// This content is shared by all Elastic Beats. Make sure you keep the +//// descriptions here generic enough to work for all Beats that include +//// this file. 
When using cross references, make sure that the cross +//// references resolve correctly for any files that include this one. +//// Use the appropriate variables defined in the index.asciidoc file to +//// resolve Beat names: beatname_uc and beatname_lc +//// Use the following include to pull this content into a doc file: +//// include::../../libbeat/docs/command-reference.asciidoc[] +////////////////////////////////////////////////////////////////////////// + + +// These attributes are used to resolve short descriptions + +:global-flags: Also see <>. + +:export-command-short-desc: Exports the configuration or index template to stdout +:help-command-short-desc: Shows help for any command +:modules-command-short-desc: Manages configured modules +:run-command-short-desc: Runs {beatname_uc}. This command is used by default if you start {beatname_uc} without specifying a command +:setup-command-short-desc: Sets up the initial environment, including the index template, Kibana dashboards (when available), and machine learning jobs (when available) +:test-command-short-desc: Tests the configuration +:version-command-short-desc: Shows information about the current version + + +[[command-line-options]] +=== {beatname_uc} commands + +{beatname_uc} provides a command-line interface for running the Beat and +performing common tasks, like testing configuration files and loading +dashboards. The command-line also supports <> +for controlling global behaviors. + +ifeval::["{beatname_lc}"!="winlogbeat"] + +[TIP] +========================= +Use `sudo` to run the following commands if: + +* the config file is owned by `root`, or +* {beatname_uc} is configured to capture data that requires `root` access + +========================= + +endif::[] + +[horizontal] +<>:: +{export-command-short-desc}. + +<>:: +{help-command-short-desc}. + +ifeval::[("{beatname_lc}"=="filebeat") or ("{beatname_lc}"=="metricbeat")] + +<>:: +{modules-command-short-desc}. 
+ +endif::[] + +<>:: +{run-command-short-desc}. + +<>:: +{setup-command-short-desc}. + +<>:: +{test-command-short-desc}. + +<>:: +{version-command-short-desc}. + +Also see <>. + +[[export-command]] +==== `export` command + +{export-command-short-desc}. You can use this +command to quickly view your configuration or the contents of the index +template. + +*SYNOPSIS* + +["source","sh",subs="attributes"] +---- +{beatname_lc} export SUBCOMMAND [FLAGS] +---- + + +*SUBCOMMANDS* + +*`config`*:: +Exports the current configuration to stdout. If you use the `-c` flag, this +command exports the configuration that's defined in the specified file. + +[[template-subcommand]] +*`template`*:: +Exports the index template to stdout. You can specify the `--es.version` and +`--index` flags to further define what gets exported. + +*FLAGS* + +*`--es.version VERSION`*:: +When specified along with <>, exports an index +template that is compatible with the specified version. + +*`-h, --help`*:: +Shows help for the `export` command. + +*`--index BASE_NAME`*:: +When specified along with <>, sets the base name +to use for the index template. If this flag is not specified, the default base +name is +{beatname_lc}+. + +{global-flags} + +*EXAMPLES* + +["source","sh",subs="attributes"] +----- +{beatname_lc} export config +{beatname_lc} export template --es.version {stack-version} --index myindexname +----- + + +[[help-command]] +==== `help` command + +{help-command-short-desc}. If no command is specified, shows help for the +`run` command. + +*SYNOPSIS* + +["source","sh",subs="attributes"] +---- +{beatname_lc} help COMMAND_NAME [FLAGS] +---- + + +*`COMMAND_NAME`*:: +Specifies the name of the command to show help for. + +*FLAGS* + +*`-h, --help`*:: Shows help for the `help` command. 
+ +{global-flags} + +*EXAMPLE* + +["source","sh",subs="attributes"] +----- +{beatname_lc} help export +----- + +ifeval::[("{beatname_lc}"=="filebeat") or ("{beatname_lc}"=="metricbeat")] + +[[modules-command]] +==== `modules` command + +{modules-command-short-desc}. You can use this command to enable and disable +specific module configurations defined in the `modules.d` directory. The +changes you make with this command are persisted and used for subsequent +runs of {beatname_uc}. + +To see which modules are enabled and disabled, run the `list` subcommand. + +*SYNOPSIS* + +["source","sh",subs="attributes"] +---- +{beatname_lc} modules SUBCOMMAND [FLAGS] +---- + + +*SUBCOMMANDS* + +*`disable MODULE_LIST`*:: +Disables the modules specified in the space-separated list. + +*`enable MODULE_LIST`*:: +Enables the modules specified in the space-separated list. + +*`list`*:: +Lists the modules that are currently enabled and disabled. + + +*FLAGS* + +*`-h, --help`*:: +Shows help for the `export` command. + + +{global-flags} + +*EXAMPLES* + +ifeval::["{beatname_lc}"=="filebeat"] + +["source","sh",subs="attributes"] +----- +{beatname_lc} modules list +{beatname_lc} modules enable apache2 auditd mysql +----- + +endif::[] + +ifeval::["{beatname_lc}"=="metricbeat"] + +["source","sh",subs="attributes"] +----- +{beatname_lc} modules list +{beatname_lc} modules enable apache nginx system +----- + + +endif::[] + +endif::[] + + +[[run-command]] +==== `run` command + +{run-command-short-desc}. + +*SYNOPSIS* + +["source","sh",subs="attributes"] +----- +{beatname_lc} run [FLAGS] +----- + +Or: + +["source","sh",subs="attributes"] +----- +{beatname_lc} [FLAGS] +----- + +*FLAGS* + +ifeval::["{beatname_lc}"=="packetbeat"] + +*`-I, --I FILE`*:: +Reads packet data from the specified file instead of reading packets from the +network. This option is useful only for testing {beatname_uc}. 
++ +["source","sh",subs="attributes"] +----- +{beatname_lc} run -I ~/pcaps/network_traffic.pcap +----- + +endif::[] + +*`-N, --N`*:: +Disables the publishing of events to the defined output. This option is useful +only for testing {beatname_uc}. + +ifeval::["{beatname_lc}"=="packetbeat"] + +*`-O, --O`*:: +Read packets one by one by pressing _Enter_ after each. This option is useful +only for testing {beatname_uc}. + +endif::[] + +*`--cpuprofile FILE`*:: +Writes CPU profile data to the specified file. This option is useful for +troubleshooting {beatname_uc}. + +ifeval::["{beatname_lc}"=="packetbeat"] + +*`-devices`*:: +Prints the list of devices that are available for sniffing and then exits. + +endif::[] + +ifeval::["{beatname_lc}"=="packetbeat"] + +*`-dump FILE`*:: +Writes all captured packets to the specified file. This option is useful for +troubleshooting {beatname_uc}. + +endif::[] + +*`-h, --help`*:: +Shows help for the `run` command. + +*`--httpprof [HOST]:PORT`*:: +Starts an http server for profiling. This option is useful for troubleshooting +and profiling {beatname_uc}. + +ifeval::["{beatname_lc}"=="packetbeat"] + +*`-l N`*:: +Reads the pcap file `N` number of times. The default is 1. Use this option in +combination with the `-I` option. For an infinite loop, use _0_. The `-l` +option is useful only for testing {beatname_uc}. + +endif::[] + +*`--memprofile FILE`*:: +Writes memory profile data to the specified output file. This option is useful +for troubleshooting {beatname_uc}. + +ifeval::["{beatname_lc}"=="filebeat"] + +*`--modules MODULE_LIST`*:: +Specifies a comma-separated list of modules to run. For example: ++ +["source","sh",subs="attributes"] +----- +{beatname_lc} run --modules nginx,mysql,system +----- ++ +Rather than specifying the list of modules every time you run {beatname_uc}, +you can use the <> command to enable and disable +specific modules. Then when you run {beatname_uc}, it will run any modules +that are enabled. 
+ +endif::[] + +ifeval::["{beatname_lc}"=="filebeat"] + +*`--once`*:: +When the `--once` flag is used, {beatname_uc} starts all configured harvesters +and prospectors, and runs each prospector until the harvesters are closed. If +you set the `--once` flag, you should also set `close_eof` so the harvester is +closed when the end of the file is reached. By default harvesters are closed +after `close_inactive` is reached. + +endif::[] + +*`--setup`*:: +Loads the sample Kibana dashboards. If you want to load the dashboards without +running {beatname_uc}, use the <> command instead. + +ifeval::["{beatname_lc}"=="metricbeat"] + +*`--system.hostfs MOUNT_POINT`*:: + +Specifies the mount point of the host's filesystem for use in monitoring a host +from within a container. + +endif::[] + +ifeval::["{beatname_lc}"=="packetbeat"] + +*`-t`*:: +Reads packets from the pcap file as fast as possible without sleeping. Use this +option in combination with the `-I` option. The `-t` option is useful only for +testing Packetbeat. + +endif::[] + +{global-flags} + +*EXAMPLE* + +["source","sh",subs="attributes"] +----- +{beatname_lc} run -e --setup +----- + +Or: + +["source","sh",subs="attributes"] +----- +{beatname_lc} -e --setup +----- + +[[setup-command]] +==== `setup` command + +{setup-command-short-desc}. + +* The index template ensures that fields are mapped correctly in Elasticsearch. +* The Kibana dashboards make it easier for you to visualize {beatname_uc} data +in Kibana. +* The machine learning jobs contain the configuration information and metadata +necessary to analyze data for anomalies. + +Use this command instead of `run --setup` when you want to set up the +environment without actually running {beatname_uc} and ingesting data. + +*SYNOPSIS* + +["source","sh",subs="attributes"] +---- +{beatname_lc} setup [FLAGS] +---- + + +*FLAGS* + +*`--dashboards`*:: +Sets up the Kibana dashboards only. + +*`-h, --help`*:: +Shows help for the `setup` command. 
+ +*`--machine-learning`*:: +Sets up machine learning job configurations only. + +ifeval::["{beatname_lc}"=="filebeat"] + +*`--modules MODULE_LIST`*:: +Specifies a comma-separated list of modules. Use this flag to avoid errors when +there are no modules defined in the +{beatname_lc}.yml+ file. + +endif::[] + +*`--template`*:: +Sets up the index template only. + +{global-flags} + +*EXAMPLE* + +["source","sh",subs="attributes"] +----- +{beatname_lc} setup --dashboards +----- + + +[[test-command]] +==== `test` command + +{test-command-short-desc}. + +*SYNOPSIS* + +["source","sh",subs="attributes"] +---- +{beatname_lc} test SUBCOMMAND [FLAGS] +---- + +*SUBCOMMANDS* + +*`config`*:: +Tests the configuration settings. + +ifeval::["{beatname_lc}"=="metricbeat"] + +*`modules [MODULE_NAME] [METRICSET_NAME]`*:: +Tests module settings for all configured modules. When you run this command, +{beatname_uc} does a test run that applies the current settings, retrieves the +metrics, and shows them as output. To test the settings for a specific module, +specify `MODULE_NAME`. To test the settings for a specific metricset in the +module, also specify `METRICSET_NAME`. + +endif::[] + +*`output`*:: +Tests that {beatname_uc} can connect to the output by using the +current settings. + +*FLAGS* + +*`-h, --help`*:: Shows help for the `test` command. + +{global-flags} + +ifeval::["{beatname_lc}"!="metricbeat"] + +*EXAMPLE* + +["source","sh",subs="attributes"] +----- +{beatname_lc} test config +----- + +endif::[] + +ifeval::["{beatname_lc}"=="metricbeat"] + +*EXAMPLES* + +["source","sh",subs="attributes"] +----- +{beatname_lc} test config +{beatname_lc} test modules system cpu +----- + +endif::[] + +[[version-command]] +==== `version` command + +{version-command-short-desc}. + +*SYNOPSIS* + +["source","sh",subs="attributes"] +---- +{beatname_lc} version [FLAGS] +---- + + +*FLAGS* + +*`-h, --help`*:: Shows help for the `version` command. 
+ +{global-flags} + +*EXAMPLE* + +["source","sh",subs="attributes"] +----- +{beatname_lc} version +---- + + +[float] +[[global-flags]] +=== Global flags + +These global flags are available whenever you run {beatname_uc}. + +*`-E, --E "SETTING_NAME=VALUE"`*:: +Overrides a specific configuration setting. You can specify multiple overrides. +For example: ++ +["source","sh",subs="attributes"] +---------------------------------------------------------------------- +{beatname_lc} -E "name=mybeat" -E "output.elasticsearch.hosts=["http://myhost:9200"]" +---------------------------------------------------------------------- ++ +This setting is applied to the currently running {beatname_uc} process. +The {beatname_uc} configuration file is not changed. + +ifeval::["{beatname_lc}"=="filebeat"] + +*`-M, --M "VAR_NAME=VALUE"`*:: Overrides the default configuration for a +{beatname_uc} module. You can specify multiple variable overrides. For example: ++ +["source","sh",subs="attributes"] +---------------------------------------------------------------------- +{beatname_lc} -modules=nginx -M "nginx.access.var.paths=[/var/log/nginx/access.log*]" -M "nginx.access.var.pipeline=no_plugins" +---------------------------------------------------------------------- + +endif::[] + +*`-c, --c FILE`*:: +Specifies the configuration file to use for {beatname_uc}. The file you specify +here is relative to `path.config`. If the `-c` flag is not specified, the +default config file, +{beatname_lc}.yml+, is used. + +*`-d, --d SELECTORS`*:: +Enables debugging for the specified selectors. For the selectors, you can +specify a comma-separated +list of components, or you can use `-d "*"` to enable debugging for all +components. For example, `-d "publish"` displays all the "publish" related +messages. + +*`-e, --e`*:: +Logs to stderr and disables syslog/file output. + +*`--path.config`*:: +Sets the path for configuration files. See the <> section for +details. 
+ +*`--path.data`*:: +Sets the path for data files. See the <> section for details. + +*`--path.home`*:: +Sets the path for miscellaneous files. See the <> section for +details. + +*`--path.logs`*:: +Sets the path for log files. See the <> section for details. + +*`--strict.perms`*:: +Sets strict permission checking on configuration files. The default is +`-strict.perms=true`. See +{libbeat}/config-file-permissions.html[Config file ownership and permissions] in +the _Beats Platform Reference_ for more information. + +*`-v, --v`*:: +Logs INFO-level messages. diff --git a/docs/copied-from-beats/loggingconfig.asciidoc b/docs/copied-from-beats/loggingconfig.asciidoc new file mode 100644 index 00000000000..c2af37a205a --- /dev/null +++ b/docs/copied-from-beats/loggingconfig.asciidoc @@ -0,0 +1,184 @@ +////////////////////////////////////////////////////////////////////////// +//// This content is shared by all Elastic Beats. Make sure you keep the +//// descriptions here generic enough to work for all Beats that include +//// this file. When using cross references, make sure that the cross +//// references resolve correctly for any files that include this one. +//// Use the appropriate variables defined in the index.asciidoc file to +//// resolve Beat names: beatname_uc and beatname_lc +//// Use the following include to pull this content into a doc file: +//// include::../../libbeat/docs/loggingconfig.asciidoc[] +//// Make sure this content appears below a level 2 heading. +////////////////////////////////////////////////////////////////////////// + +[[configuration-logging]] +== Set up logging + +The `logging` section of the +{beatname_lc}.yml+ config file contains options +for configuring the Beats logging output. The logging system can write logs to +the syslog or rotate log files. If logging is not explicitly configured the file +output is used. 
+ +[source,yaml] +-------------------------------------------------------------------------------- +logging.level: info +logging.to_files: true +logging.files: + path: /var/log/{beatname_lc} + name: {beatname_lc} + keepfiles: 7 + permissions: 0644 +-------------------------------------------------------------------------------- + +TIP: In addition to setting logging options in the config file, you can modify +the logging output configuration from the command line. See +<>. + +[float] +=== Configuration options + +You can specify the following options in the `logging` section of the ++{beatname_lc}.yml+ config file: + +[float] +==== `logging.to_syslog` + +When true, writes all logging output to the syslog. + +[float] +==== `logging.to_eventlog` + +When true, writes all logging output to the Windows Event Log. + +[float] +==== `logging.to_files` + +When true, writes all logging output to files. The log files are automatically +rotated when the log file size limit is reached. + +NOTE: {beatname_uc} only creates a log file if there is logging output. For +example, if you set the log <> to `error` and there are no +errors, there will be no log file in the directory specified for logs. + +[float] +[[level]] +==== `logging.level` + +Minimum log level. One of `debug`, `info`, `warning`, or `error`. The default +log level is `info`. + +`debug`:: Logs debug messages, including a detailed printout of all events +flushed by the Beat. Also logs informational messages, warnings, errors, and +critical errors. When the log level is `debug`, you can specify a list of +<> to display debug messages for specific components. If +no selectors are specified, the `*` selector is used to display debug messages +for all components. + +`info`:: Logs informational messages, including the number of events that are +published. Also logs any warnings, errors, or critical errors. + +`warning`:: Logs warnings, errors, and critical errors. + +`error`:: Logs errors and critical errors. 
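+
+For example, a fragment that raises the log level and narrows debug output to
+the publisher component (the `publish` selector is described below; treat the
+values as illustrative):
+
+[source,yaml]
+--------------------------------------------------------------------------------
+logging.level: debug
+logging.selectors: ["publish"]
+--------------------------------------------------------------------------------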
+ +[float] +[[selectors]] +==== `logging.selectors` + +The list of debugging-only selector tags used by different Beats components. Use `*` +to enable debug output for all components. For example add `publish` to display +all the debug messages related to event publishing. When starting the Beat, +selectors can be overwritten using the `-d` command line option (`-d` also sets +the debug log level). + +[float] +==== `logging.metrics.enabled` + +If enabled, {beatname_uc} periodically logs its internal metrics that have +changed in the last period. For each metric that changed, the delta from the +value at the beginning of the period is logged. Also, the total values for all +non-zero internal metrics are logged on shutdown. The default is true. + +Here is an example log line: + +[source,shell] +---------------------------------------------------------------------------------------------------------------------------------------------------- +2017-12-17T19:17:42.667-0500 INFO [metrics] log/log.go:110 Non-zero metrics in the last 30s: beat.info.uptime.ms=30004 beat.memstats.gc_next=5046416 +---------------------------------------------------------------------------------------------------------------------------------------------------- + +Note that we currently offer no backwards compatible guarantees for the internal +metrics and for this reason they are also not documented. + + +[float] +==== `logging.metrics.period` + +The period after which to log the internal metrics. The default is 30s. + +[float] +==== `logging.files.path` + +The directory that log files are written to. The default is the logs path. See +the <> section for details. + +[float] +==== `logging.files.name` + +The name of the file that logs are written to. By default, the name of the Beat +is used. + +[float] +==== `logging.files.rotateeverybytes` + +The maximum size of a log file. If the limit is reached, a new log file is +generated. The default size limit is 10485760 (10 MB). 
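+
+For example, to make the rotation limit explicit in the config file (the value
+shown is simply the default, 10 MB):
+
+[source,yaml]
+--------------------------------------------------------------------------------
+logging.files:
+  rotateeverybytes: 10485760
+--------------------------------------------------------------------------------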
+ +[float] +==== `logging.files.keepfiles` + +The number of most recent rotated log files to keep on disk. Older files are +deleted during log rotation. The default value is 7. The `keepfiles` options has +to be in the range of 2 to 1024 files. + +[float] +==== `logging.files.permissions` + +The permissions mask to apply when rotating log files. The default value is +0600. The `permissions` option must be a valid Unix-style file permissions mask +expressed in octal notation. In Go, numbers in octal notation must start with +'0'. + +Examples: + +* 0644: give read and write access to the file owner, and read access to all others. +* 0600: give read and write access to the file owner, and no access to all others. +* 0664: give read and write access to the file owner and members of the group +associated with the file, as well as read access to all other users. + +[float] +==== `logging.json` + +When true, logs messages in JSON format. The default is false. + +[float] +=== Logging format + +The logging format is generally the same for each logging output. The one +exception is with the syslog output where the timestamp is not included in the +message because syslog adds its own timestamp. 
+ +Each log message consists of the following parts: + +* Timestamp in ISO8601 format +* Level +* Logger name contained in brackets (Optional) +* File name and line number of the caller +* Message +* Structured data encoded in JSON (Optional) + +Below are some samples: + +`2017-12-17T18:54:16.241-0500 INFO logp/core_test.go:13 unnamed global logger` + +`2017-12-17T18:54:16.242-0500 INFO [example] logp/core_test.go:16 some message` + +`2017-12-17T18:54:16.242-0500 INFO [example] logp/core_test.go:19 some message {"x": 1}` \ No newline at end of file diff --git a/docs/copied-from-beats/shared-directory-layout.asciidoc b/docs/copied-from-beats/shared-directory-layout.asciidoc new file mode 100644 index 00000000000..d336fb986f0 --- /dev/null +++ b/docs/copied-from-beats/shared-directory-layout.asciidoc @@ -0,0 +1,99 @@ +////////////////////////////////////////////////////////////////////////// +//// This content is shared by all Elastic Beats. Make sure you keep the +//// descriptions here generic enough to work for all Beats that include +//// this file. When using cross references, make sure that the cross +//// references resolve correctly for any files that include this one. +//// Use the appropriate variables defined in the index.asciidoc file to +//// resolve Beat names: beatname_uc and beatname_lc. +//// Use the following include to pull this content into a doc file: +//// include::../../libbeat/docs/shared-directory-layout.asciidoc[] +////////////////////////////////////////////////////////////////////////// + +[[directory-layout]] +=== Directory layout + +The directory layout of an installation is as follows: + +[cols="> in the configuration +file. + +==== Default paths + +{beatname_uc} uses the following default paths unless you explicitly change them. + +ifeval::["{beatname_lc}"!="winlogbeat"] + +[float] +===== deb and rpm +[cols="> section for more details. 
+ +Here is an example configuration: + +[source,yaml] +------------------------------------------------------------------------------ +path.home: /usr/share/beat +path.config: /etc/beat +path.data: /var/lib/beat +path.logs: /var/log/ +------------------------------------------------------------------------------ + +Note that it is possible to override these options by using command line flags. + +[float] +=== Configuration options + +You can specify the following options in the `path` section of the +{beatname_lc}.yml+ config file: + +[float] +==== `home` + +The home path for the {beatname_uc} installation. This is the default base path for all +other path settings and for miscellaneous files that come with the distribution (for example, the +sample dashboards). If not set by a CLI flag or in the configuration file, the default +for the home path is the location of the {beatname_uc} binary. + +Example: + +[source,yaml] +------------------------------------------------------------------------------ +path.home: /usr/share/beats +------------------------------------------------------------------------------ + +[float] +==== `config` + +The configuration path for the {beatname_uc} installation. This is the default base path +for configuration files, including the main YAML configuration file and the +Elasticsearch template file. If not set by a CLI flag or in the configuration file, the default for the +configuration path is the home path. + +Example: + +[source,yaml] +------------------------------------------------------------------------------ +path.config: /usr/share/beats/config +------------------------------------------------------------------------------ + +[float] +==== `data` + +The data path for the {beatname_uc} installation. This is the default base path for all +the files in which {beatname_uc} needs to store its data. If not set by a CLI +flag or in the configuration file, the default for the data path is a `data` +subdirectory inside the home path. 
+ + +Example: + +[source,yaml] +------------------------------------------------------------------------------ +path.data: /var/lib/beats +------------------------------------------------------------------------------ + +[float] +==== `logs` + +The logs path for a {beatname_uc} installation. This is the default location for the Beat's +log files. If not set by a CLI flag or in the configuration file, the default +for the logs path is a `logs` subdirectory inside the home path. + +Example: + +[source,yaml] +------------------------------------------------------------------------------ +path.logs: /var/log/beats +------------------------------------------------------------------------------ \ No newline at end of file diff --git a/docs/high-availability.asciidoc b/docs/high-availability.asciidoc index 6277d819248..82b3202f141 100644 --- a/docs/high-availability.asciidoc +++ b/docs/high-availability.asciidoc @@ -1,5 +1,4 @@ [[high-availability]] -[float] === High Availability The API exposed by APM Server is a regular HTTP JSON API. 
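A common way to add resilience on the output side is to list several
Elasticsearch nodes in the `hosts` setting, so events can still be shipped when
a single node becomes unavailable. A minimal sketch (the host names are
placeholders):

[source,yaml]
----
output.elasticsearch:
  hosts: ["es-node1:9200", "es-node2:9200", "es-node3:9200"]
----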
diff --git a/docs/index.asciidoc b/docs/index.asciidoc index f317d1deee9..94c714ef51e 100644 --- a/docs/index.asciidoc +++ b/docs/index.asciidoc @@ -15,18 +15,20 @@ please view this documentation at https://www.elastic.co/guide/en/apm/server[ela endif::[] [[apm-server]] -= APM Server Docs (Beta) += APM Server Reference (Beta) include::./overview.asciidoc[] include::./installing.asciidoc[] -include::./configuring.asciidoc[] +include::./setting-up-and-running.asciidoc[] include::./event-types.asciidoc[] include::./frontend.asciidoc[] +include::./configuring.asciidoc[] + include::./intake-api.asciidoc[] include::./fields.asciidoc[] diff --git a/docs/installing.asciidoc b/docs/installing.asciidoc index 0c35d44db4b..a922e0021a7 100644 --- a/docs/installing.asciidoc +++ b/docs/installing.asciidoc @@ -2,14 +2,14 @@ == Installing APM Server https://www.elastic.co/downloads/apm/apm-server[Download APM Server] for your operating system and extract the package. -Then follow the instructions on <> +Then follow the instructions on <> -You can also install APM Server from our repositories, run it through docker or install as a service on Windows: +You can also install APM Server from our repositories or install as a service on Windows: -* <> * <> * <> -include::./copied-from-beats/running-on-docker.asciidoc[] +To run APM Server in Docker, please see <>. + include::./copied-from-beats/repositories.asciidoc[] -include::./installing-on-windows.asciidoc[] \ No newline at end of file +include::./installing-on-windows.asciidoc[] diff --git a/docs/overview.asciidoc b/docs/overview.asciidoc index 80f6a3a6b09..5dc1184e8ab 100644 --- a/docs/overview.asciidoc +++ b/docs/overview.asciidoc @@ -29,5 +29,5 @@ and as such it shares many of the same configuration options as beats. 
In the following you can read more about * <> -* <> +* <> diff --git a/docs/security.asciidoc b/docs/security.asciidoc index 94505539649..c84c23ac842 100644 --- a/docs/security.asciidoc +++ b/docs/security.asciidoc @@ -1,5 +1,4 @@ [[security]] -[float] === Security APM Server exposes a HTTP endpoint and as with anything that opens ports on your servers, diff --git a/docs/setting-up-and-running.asciidoc b/docs/setting-up-and-running.asciidoc new file mode 100644 index 00000000000..628838a9377 --- /dev/null +++ b/docs/setting-up-and-running.asciidoc @@ -0,0 +1,64 @@ + +[[setting-up-and-running]] +== Set up and run APM Server + +In a production environment, +you would put APM Server on its own machines, +similar to how you run Elasticsearch. +You _can_ run it on the same machines as Elasticsearch, +but this is not recommended, +as the processes will be competing for resources. +To start APM Server, run: + +[source,bash] +---------------------------------- +./apm-server -e +---------------------------------- + +You should see APM Server start up. +It will try to connect to Elasticsearch on localhost port 9200 and expose an API to agents on port 8200. +You can change the defaults by supplying a different address on the command line: + +[source,bash] +---------------------------------- +./apm-server -e -E output.elasticsearch.hosts=ElasticsearchAddress:9200 -E apm-server.host=localhost:8200 +---------------------------------- + +Or you can update the `apm-server.yml` configuration file to change the defaults. + +[source,yaml] +---------------------------------- +apm-server: + host: localhost:8200 + +output: + elasticsearch: + hosts: ElasticsearchAddress:9200 +---------------------------------- + + +NOTE: If you are using an X-Pack secured version of Elastic Stack, +you need to specify credentials in the config file before you run the commands that set up and start APM Server. 
+For example: + +[source,yaml] +---- +output.elasticsearch: + hosts: ["ElasticsearchAddress:9200"] + username: "elastic" + password: "elastic" +---- + +See https://github.com/elastic/apm-server/blob/{doc-branch}/apm-server.reference.yml[`apm-server.reference.yml`] for more configuration options. + + +include::./high-availability.asciidoc[] + +include::./security.asciidoc[] + +include::./copied-from-beats/command-reference.asciidoc[] + +include::./copied-from-beats/shared-directory-layout.asciidoc[] + + +include::./copied-from-beats/running-on-docker.asciidoc[] From c59568ba0dea3a76e44ae110e0a7848ff7a1d912 Mon Sep 17 00:00:00 2001 From: Ron Cohen Date: Thu, 25 Jan 2018 20:21:30 +0100 Subject: [PATCH 02/31] Moved some sections around and updated with more generic beats docs --- docs/configuring.asciidoc | 19 +- docs/copied-from-beats/dashboards.asciidoc | 83 ++ .../dashboardsconfig.asciidoc | 99 ++ docs/copied-from-beats/outputconfig.asciidoc | 1170 +++++++++++++++++ .../shared-configuring.asciidoc | 26 + .../shared-kibana-config.asciidoc | 108 ++ .../shared-ssl-config.asciidoc | 170 +++ .../shared-template-load.asciidoc | 254 ++++ .../template-config.asciidoc | 84 ++ docs/index.asciidoc | 1 + docs/overview.asciidoc | 2 +- docs/setting-up-and-running.asciidoc | 3 +- 12 files changed, 2015 insertions(+), 4 deletions(-) create mode 100644 docs/copied-from-beats/dashboards.asciidoc create mode 100644 docs/copied-from-beats/dashboardsconfig.asciidoc create mode 100644 docs/copied-from-beats/outputconfig.asciidoc create mode 100644 docs/copied-from-beats/shared-configuring.asciidoc create mode 100644 docs/copied-from-beats/shared-kibana-config.asciidoc create mode 100644 docs/copied-from-beats/shared-ssl-config.asciidoc create mode 100644 docs/copied-from-beats/shared-template-load.asciidoc create mode 100644 docs/copied-from-beats/template-config.asciidoc diff --git a/docs/configuring.asciidoc b/docs/configuring.asciidoc index d91b5b7dd1e..59a1f5cdbe4 100644 --- 
a/docs/configuring.asciidoc +++ b/docs/configuring.asciidoc @@ -1,9 +1,10 @@ -[[configure]] +[[apm-server-configuration]] = Configure APM Server - [partintro] -- +include::./copied-from-beats/shared-configuring.asciidoc[] + The following explains how to configure APM Server: * <> @@ -11,11 +12,25 @@ The following explains how to configure APM Server: * <> -- +:only-elasticsearch: +:no-pipeline: +include::./copied-from-beats/outputconfig.asciidoc[] + +include::./copied-from-beats/shared-ssl-config.asciidoc[] +include::./template-config.asciidoc[] include::./copied-from-beats/loggingconfig.asciidoc[] +include::./copied-from-beats/dashboardsconfig.asciidoc[] + +include::./copied-from-beats/shared-kibana-config.asciidoc[] + :standalone: include::./copied-from-beats/shared-env-vars.asciidoc[] include::./copied-from-beats/shared-path-config.asciidoc[] + + + + diff --git a/docs/copied-from-beats/dashboards.asciidoc b/docs/copied-from-beats/dashboards.asciidoc new file mode 100644 index 00000000000..eb1f7959b68 --- /dev/null +++ b/docs/copied-from-beats/dashboards.asciidoc @@ -0,0 +1,83 @@ +////////////////////////////////////////////////////////////////////////// +//// This content is shared by all Elastic Beats. Make sure you keep the +//// descriptions here generic enough to work for all Beats that include +//// this file. When using cross references, make sure that the cross +//// references resolve correctly for any files that include this one. +//// Use the appropriate variables defined in the index.asciidoc file to +//// resolve Beat names: beatname_uc and beatname_lc. +//// Use the following include to pull this content into a doc file: +//// include::../../libbeat/docs/dashboards.asciidoc[] +////////////////////////////////////////////////////////////////////////// + + +{beatname_uc} comes packaged with example Kibana dashboards, visualizations, +and searches for visualizing {beatname_uc} data in Kibana. 
Before you can use +the dashboards, you need to create the index pattern, +{beat_default_index_prefix}-*+, and +load the dashboards into Kibana. To do this, you can either run the `setup` +command (as described here) or +<> in the ++{beatname_lc}.yml+ config file. + +NOTE: Starting with {beatname_uc} 6.0.0, the dashboards are loaded via the Kibana API. +This requires a Kibana endpoint configuration. You should have configured the +endpoint earlier when you +<<{beatname_lc}-configuration,configured {beatname_uc}>>. If you didn't, +configure it now. + +Make sure Kibana is running before you perform this step. If you are accessing a +secured Kibana instance, make sure you've configured credentials as described in +<<{beatname_lc}-configuration>>. + +To set up the Kibana dashboards for {beatname_uc}, use the appropriate command +for your system. + +ifdef::allplatforms[] + +ifeval::["{requires-sudo}"=="yes"] + +include::../../libbeat/docs/shared-note-sudo.asciidoc[] + +endif::[] + +*deb and rpm:* + +["source","sh",subs="attributes"] +---------------------------------------------------------------------- +{beatname_lc} setup --dashboards +---------------------------------------------------------------------- + + +*mac:* + +["source","sh",subs="attributes"] +---------------------------------------------------------------------- +./{beatname_lc} setup --dashboards +---------------------------------------------------------------------- + + +ifeval::["{beatname_lc}"!="auditbeat"] + +*docker:* + +["source","sh",subs="attributes"] +---------------------------------------------------------------------- +docker run {dockerimage} setup --dashboards +---------------------------------------------------------------------- + +endif::[] + +*win:* + +endif::allplatforms[] + +Open a PowerShell prompt as an Administrator (right-click the PowerShell icon +and select *Run As Administrator*). If you are running Windows XP, you may need +to download and install PowerShell. 
+ +From the PowerShell prompt, change to the directory where you installed {beatname_uc}, +and run: + +["source","sh",subs="attributes,callouts"] +---------------------------------------------------------------------- +PS > {beatname_lc} setup --dashboards +---------------------------------------------------------------------- diff --git a/docs/copied-from-beats/dashboardsconfig.asciidoc b/docs/copied-from-beats/dashboardsconfig.asciidoc new file mode 100644 index 00000000000..03a24c41c5d --- /dev/null +++ b/docs/copied-from-beats/dashboardsconfig.asciidoc @@ -0,0 +1,99 @@ +////////////////////////////////////////////////////////////////////////// +//// This content is shared by all Elastic Beats. Make sure you keep the +//// descriptions here generic enough to work for all Beats that include +//// this file. When using cross references, make sure that the cross +//// references resolve correctly for any files that include this one. +//// Use the appropriate variables defined in the index.asciidoc file to +//// resolve Beat names: beatname_uc and beatname_lc +//// Use the following include to pull this content into a doc file: +//// include::../../libbeat/docs/dashboardsconfig.asciidoc[] +////////////////////////////////////////////////////////////////////////// + +[[configuration-dashboards]] +== Load the Kibana dashboards + +{beatname_uc} comes packaged with example Kibana dashboards, visualizations, +and searches for visualizing {beatname_uc} data in Kibana. + +To load the dashboards, you can either enable dashboard loading in the +`setup.dashboards` section of the +{beatname_lc}.yml+ config file, or you can +run the `setup` command. Dashboard loading is disabled by default. + +When dashboard loading is enabled, {beatname_uc} uses the Kibana API to load the +sample dashboards. Dashboard loading is only attempted at Beat startup. +If Kibana is not available at startup, {beatname_uc} will stop with an error. 
+
+To enable dashboard loading, add the following setting to the config file:
+
+[source,yaml]
+------------------------------------------------------------------------------
+setup.dashboards.enabled: true
+------------------------------------------------------------------------------
+
+[float]
+=== Configuration options
+
+You can specify the following options in the `setup.dashboards` section of the
++{beatname_lc}.yml+ config file:
+
+[float]
+==== `setup.dashboards.enabled`
+
+If this option is set to true, {beatname_uc} loads the sample Kibana dashboards
+automatically on startup. If no other options are set, the dashboards are loaded
+from the local `kibana` directory in the home path of the installation.
+
+To load dashboards from a different location, you can configure one of the
+following options: <>,
+<>, or
+<>.
+
+[float]
+[[directory-option]]
+==== `setup.dashboards.directory`
+
+The directory that contains the dashboards to load. The default is the `kibana`
+folder in the home path.
+
+[float]
+[[url-option]]
+==== `setup.dashboards.url`
+
+The URL to use for downloading the dashboard archive. If this option
+is set, {beatname_uc} downloads the dashboard archive from the specified URL
+instead of using the local directory.
+
+[float]
+[[file-option]]
+==== `setup.dashboards.file`
+
+The file archive (zip file) that contains the dashboards to load. If this option
+is set, {beatname_uc} looks for a dashboard archive in the specified path
+instead of using the local directory.
+
+[float]
+==== `setup.dashboards.beat`
+
+If the archive contains the dashboards for multiple Beats, this setting
+lets you select the Beat for which you want to load dashboards. To load all the
+dashboards in the archive, set this option to an empty string. The default is
++"{beatname_lc}"+.
+
+[float]
+==== `setup.dashboards.kibana_index`
+
+The name of the Kibana index to use for setting the configuration.
The default
+is `".kibana"`.
+
+
+[float]
+==== `setup.dashboards.index`
+
+The Elasticsearch index name. This setting overwrites the index name defined
+in the dashboards and index pattern. Example: `"testbeat-*"`.
+
+[float]
+==== `setup.dashboards.always_kibana`
+
+Force loading of dashboards using the Kibana API without querying Elasticsearch for the version.
+The default is `false`.
diff --git a/docs/copied-from-beats/outputconfig.asciidoc b/docs/copied-from-beats/outputconfig.asciidoc
new file mode 100644
index 00000000000..46729c24d63
--- /dev/null
+++ b/docs/copied-from-beats/outputconfig.asciidoc
@@ -0,0 +1,1170 @@
+//////////////////////////////////////////////////////////////////////////
+//// This content is shared by all Elastic Beats. Make sure you keep the
+//// descriptions here generic enough to work for all Beats that include
+//// this file. When using cross references, make sure that the cross
+//// references resolve correctly for any files that include this one.
+//// Use the appropriate variables defined in the index.asciidoc file to
+//// resolve Beat names: beatname_uc and beatname_lc.
+//// Use the following include to pull this content into a doc file:
+//// include::../../libbeat/docs/outputconfig.asciidoc[]
+//// Make sure this content appears below a level 2 heading.
+//////////////////////////////////////////////////////////////////////////
+
+[[configuring-output]]
+== Configure the output
+
+You configure {beatname_uc} to write to a specific output by setting options
+in the `output` section of the +{beatname_lc}.yml+ config file. Only a single
+output may be defined.
+ +The following topics describe how to configure each supported output: + +* <> + +ifndef::only-elasticsearch[] +* <> +* <> +* <> +* <> +* <> +endif::[] + + +[[elasticsearch-output]] +=== Configure the Elasticsearch output + +++++ +Elasticsearch +++++ + +When you specify Elasticsearch for the output, {beatname_uc} sends the transactions directly to Elasticsearch by using the Elasticsearch HTTP API. + +Example configuration: + +["source","yaml",subs="attributes"] +------------------------------------------------------------------------------ + +output.elasticsearch: + hosts: ["http://localhost:9200"] + index: "{beatname_lc}-%{[beat.version]}-%{+yyyy.MM.dd}" + ssl.certificate_authorities: ["/etc/pki/root/ca.pem"] + ssl.certificate: "/etc/pki/client/cert.pem" + ssl.key: "/etc/pki/client/cert.key" +------------------------------------------------------------------------------ + +To enable SSL, just add `https` to all URLs defined under __hosts__. + +["source","yaml",subs="attributes,callouts"] +------------------------------------------------------------------------------ + +output.elasticsearch: + hosts: ["https://localhost:9200"] + username: "admin" + password: "s3cr3t" +------------------------------------------------------------------------------ + +If the Elasticsearch nodes are defined by `IP:PORT`, then add `protocol: https` to the yaml file. + +[source,yaml] +------------------------------------------------------------------------------ +output.elasticsearch: + hosts: ["localhost"] + protocol: "https" + username: "admin" + password: "s3cr3t" + +------------------------------------------------------------------------------ + +==== Compatibility + +This output works with all compatible versions of Elasticsearch. See "Supported Beats Versions" in the https://www.elastic.co/support/matrix#show_compatibility[Elastic Support Matrix]. 
+ +==== Configuration options + +You can specify the following options in the `elasticsearch` section of the +{beatname_lc}.yml+ config file: + +===== `enabled` + +The enabled config is a boolean setting to enable or disable the output. If set +to false, the output is disabled. + +The default value is true. + + +[[hosts-option]] +===== `hosts` + +The list of Elasticsearch nodes to connect to. The events are distributed to +these nodes in round robin order. If one node becomes unreachable, the event is +automatically sent to another node. Each Elasticsearch node can be defined as a `URL` or `IP:PORT`. +For example: `http://192.15.3.2`, `https://es.found.io:9230` or `192.24.3.2:9300`. +If no port is specified, `9200` is used. + +NOTE: When a node is defined as an `IP:PORT`, the _scheme_ and _path_ are taken from the +<> and <> config options. + +[source,yaml] +------------------------------------------------------------------------------ +output.elasticsearch: + hosts: ["10.45.3.2:9220", "10.45.3.1:9230"] + protocol: https + path: /elasticsearch +------------------------------------------------------------------------------ + +In the previous example, the Elasticsearch nodes are available at `https://10.45.3.2:9220/elasticsearch` and +`https://10.45.3.1:9230/elasticsearch`. + +===== `compression_level` + +The gzip compression level. Setting this value to 0 disables compression. +The compression level must be in the range of 1 (best speed) to 9 (best compression). + +Increasing the compression level will reduce the network usage but will increase the cpu usage. + +The default value is 0. + +===== `worker` + +The number of workers per configured host publishing events to Elasticsearch. This +is best used with load balancing mode enabled. Example: If you have 2 hosts and +3 workers, in total 6 workers are started (3 for each host). + +===== `username` + +The basic authentication username for connecting to Elasticsearch. 
+ +===== `password` + +The basic authentication password for connecting to Elasticsearch. + +===== `parameters` + +Dictionary of HTTP parameters to pass within the url with index operations. + +[[protocol-option]] +===== `protocol` + +The name of the protocol Elasticsearch is reachable on. The options are: +`http` or `https`. The default is `http`. However, if you specify a URL for +<>, the value of `protocol` is overridden by whatever scheme you +specify in the URL. + +[[path-option]] +===== `path` + +An HTTP path prefix that is prepended to the HTTP API calls. This is useful for +the cases where Elasticsearch listens behind an HTTP reverse proxy that exports +the API under a custom prefix. + +===== `headers` + +Custom HTTP headers to add to each request created by the Elasticsearch output. +Example: + +[source,yaml] +------------------------------------------------------------------------------ +output.elasticsearch.headers: + X-My-Header: Header contents +------------------------------------------------------------------------------ + +It is generally possible to specify multiple header values for the same header +name by separating them with a comma. + +===== `proxy_url` + +The URL of the proxy to use when connecting to the Elasticsearch servers. The +value may be either a complete URL or a "host[:port]", in which case the "http" +scheme is assumed. If a value is not specified through the configuration file +then proxy environment variables are used. See the +https://golang.org/pkg/net/http/#ProxyFromEnvironment[golang documentation] +for more information about the environment variables. + +[[index-option-es]] +===== `index` + +The index name to write events to. The default is ++"{beatname_lc}-%\{[beat.version]\}-%\{+yyyy.MM.dd\}"+ (for example, ++"{beatname_lc}-{version}-2017.04.26"+). If you change this setting, you also +need to configure the `setup.template.name` and `setup.template.pattern` options +(see <>). 
If you are using the pre-built Kibana +dashboards, you also need to set the `setup.dashboards.index` option (see +<>). + + +===== `indices` + +Array of index selector rules supporting conditionals, format string +based field access and name mappings. The first rule matching will be used to +set the `index` for the event to be published. If `indices` is missing or no +rule matches, the `index` field will be used. + +Rule settings: + +*`index`*: The index format string to use. If the fields used are missing, the rule fails. + +*`mapping`*: Dictionary mapping index names to new names + +*`default`*: Default string value if `mapping` does not find a match. + +*`when`*: Condition which must succeed in order to execute the current rule. + +Examples elasticsearch output with `indices`: + +["source","yaml"] +------------------------------------------------------------------------------ +output.elasticsearch: + hosts: ["http://localhost:9200"] + index: "logs-%{[beat.version]}-%{+yyyy.MM.dd}" + indices: + - index: "critical-%{[beat.version]}-%{+yyyy.MM.dd}" + when.contains: + message: "CRITICAL" + - index: "error-%{[beat.version]}-%{+yyyy.MM.dd}" + when.contains: + message: "ERR" +------------------------------------------------------------------------------ + +ifndef::no-pipeline[] +===== `pipeline` + +A format string value that specifies the ingest node pipeline to write events to. + +["source","yaml"] +------------------------------------------------------------------------------ +output.elasticsearch: + hosts: ["http://localhost:9200"] + pipeline: my_pipeline_id +------------------------------------------------------------------------------ + +For more information, see <>. + +===== `pipelines` + +Similar to the `indices` array, this is an array of pipeline selector +configurations supporting conditionals, format string based field access +and name mappings. The first rule matching will be used to set the +`pipeline` for the event to be published. 
If `pipelines` is missing or +no rule matches, the `pipeline` field will be used. + +Example elasticsearch output with `pipelines`: + +["source","yaml"] +------------------------------------------------------------------------------ +filebeat.prospectors: +- paths: ["/var/log/app/normal/*.log"] + fields: + type: "normal" +- paths: ["/var/log/app/critical/*.log"] + fields: + type: "critical" + +output.elasticsearch: + hosts: ["http://localhost:9200"] + index: "filebeat-%{[beat.version]}-%{+yyyy.MM.dd}" + pipelines: + - pipeline: critical_pipeline + when.equals: + fields.type: "critical" + - pipeline: normal_pipeline + when.equals: + fields.type: "normal" +------------------------------------------------------------------------------ +endif::[] + +===== `max_retries` + +The number of times to retry publishing an event after a publishing failure. +After the specified number of retries, the events are typically dropped. +Some Beats, such as Filebeat, ignore the `max_retries` setting and retry until all +events are published. + +Set `max_retries` to a value less than 0 to retry until all events are published. + +The default is 3. + +===== `bulk_max_size` + +The maximum number of events to bulk in a single Elasticsearch bulk API index request. The default is 50. + +Events can be collected into batches. {beatname_uc} will split batches larger than `bulk_max_size` +into multiple batches. + +Specifying a larger batch size can improve performance by lowering the overhead of sending events. +However big batch sizes can also increase processing times, which might result in +API errors, killed connections, timed-out publishing requests, and, ultimately, lower +throughput. + +Setting `bulk_max_size` to values less than or equal to 0 disables the +splitting of batches. When splitting is disabled, the queue decides on the +number of events to be contained in a batch. + +===== `timeout` + +The http request timeout in seconds for the Elasticsearch request. The default is 90. 
+ +===== `ssl` + +Configuration options for SSL parameters like the certificate authority to use +for HTTPS-based connections. If the `ssl` section is missing, the host CAs are used for HTTPS connections to +Elasticsearch. + +See <> for more information. + +ifndef::only-elasticsearch[] + +[[logstash-output]] +=== Configure the Logstash output + +++++ +Logstash +++++ + +The Logstash output sends events directly to Logstash by using the lumberjack +protocol, which runs over TCP. Logstash allows for additional processing and routing of +generated events. + +include::../../libbeat/docs/shared-logstash-config.asciidoc[] + +==== Accessing metadata fields + +Every event sent to Logstash contains the following metadata fields that you can +use in Logstash for indexing and filtering: + +["source","json",subs="attributes"] +------------------------------------------------------------------------------ +{ + ... + "@metadata": { <1> + "beat": "{beatname_lc}", <2> + "version": "{stack-version}" <3> + "type": "doc" <4> + } +} +------------------------------------------------------------------------------ +<1> {beatname_uc} uses the `@metadata` field to send metadata to Logstash. See the +{logstashdoc}/event-dependent-configuration.html#metadata[Logstash documentation] +for more about the `@metadata` field. +<2> The default is {beatname_lc}. To change this value, set the +<> option in the {beatname_uc} config file. +<3> The beats current version. +<4> The value of `type` is currently hardcoded to `doc`. It was used by previous +Logstash configs to set the type of the document in Elasticsearch. + + +WARNING: The `@metadata.type` field, added by the Logstash output, is +deprecated, hardcoded to `doc`, and will be removed in {beatname_uc} 7.0. + +You can access this metadata from within the Logstash config file to set values +dynamically based on the contents of the metadata. 
+ +For example, the following Logstash configuration file for versions 2.x and +5.x sets Logstash to use the index and document type reported by Beats for +indexing events into Elasticsearch: + +[source,logstash] +------------------------------------------------------------------------------ + +input { + beats { + port => 5044 + } +} + +output { + elasticsearch { + hosts => ["http://localhost:9200"] + index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}" <1> + } +} +------------------------------------------------------------------------------ +<1> `%{[@metadata][beat]}` sets the first part of the index name to the value +of the `beat` metadata field, `%{[@metadata][version]}` sets the second part to +the Beat's version, and `%{+YYYY.MM.dd}` sets the third part of the +name to a date based on the Logstash `@timestamp` field. For example: ++{beatname_lc}-{version}-2017.03.29+. + +Events indexed into Elasticsearch with the Logstash configuration shown here +will be similar to events directly indexed by Beats into Elasticsearch. + + +==== Compatibility + +This output works with all compatible versions of Logstash. See "Supported Beats Versions" in the https://www.elastic.co/support/matrix#show_compatibility[Elastic Support Matrix]. + +==== Configuration options + +You can specify the following options in the `logstash` section of the ++{beatname_lc}.yml+ config file: + +===== `enabled` + +The enabled config is a boolean setting to enable or disable the output. If set +to false, the output is disabled. + +The default value is true. + +[[hosts]] +===== `hosts` + +The list of known Logstash servers to connect to. If load balancing is disabled, but +multiple hosts are configured, one host is selected randomly (there is no precedence). +If one host becomes unreachable, another one is selected randomly. + +All entries in this list can contain a port number. If no port number is given, the +value specified for <> is used as the default port number. 
+
+===== `compression_level`
+
+The gzip compression level. Setting this value to 0 disables compression.
+The compression level must be in the range of 1 (best speed) to 9 (best compression).
+
+Increasing the compression level will reduce the network usage but will increase the CPU usage.
+
+The default value is 3.
+
+===== `worker`
+
+The number of workers per configured host publishing events to Logstash. This
+is best used with load balancing mode enabled. Example: If you have 2 hosts and
+3 workers, in total 6 workers are started (3 for each host).
+
+[[loadbalance]]
+===== `loadbalance`
+
+If set to true and multiple Logstash hosts are configured, the output plugin
+load balances published events onto all Logstash hosts. If set to false,
+the output plugin sends all events to only one host (determined at random) and
+will switch to another host if the selected one becomes unresponsive. The default value is false.
+
+===== `ttl`
+
+Time to live for a connection to Logstash after which the connection will be re-established.
+Useful when Logstash hosts represent load balancers. Since the connections to Logstash hosts
+are sticky, operating behind load balancers can lead to uneven load distribution between the instances.
+Specifying a TTL on the connection makes it possible to achieve equal connection distribution between the
+instances. Specifying a TTL of 0 will disable this feature.
+
+The default value is 0.
+
+NOTE: The "ttl" option is not yet supported on an async Logstash client (one with the "pipelining" option set).
+
+["source","yaml",subs="attributes"]
+------------------------------------------------------------------------------
+output.logstash:
+  hosts: ["localhost:5044", "localhost:5045"]
+  loadbalance: true
+  index: {beatname_lc}
+------------------------------------------------------------------------------
+
+===== `pipelining`
+
+Configures the number of batches to be sent asynchronously to Logstash while waiting
+for ACK from Logstash.
Output only becomes blocking once `pipelining`
+batches have been written. Pipelining is disabled if a value of 0 is
+configured. The default value is 5.
+
+[[port]]
+===== `port`
+
+deprecated[5.0.0]
+
+The default port to use if the port number is not given in <>. The default port number
+is 10200.
+
+===== `proxy_url`
+
+The URL of the SOCKS5 proxy to use when connecting to the Logstash servers. The
+value must be a URL with a scheme of `socks5://`. The protocol used to
+communicate to Logstash is not based on HTTP, so a web proxy cannot be used.
+
+If the SOCKS5 proxy server requires client authentication, then a username and
+password can be embedded in the URL as shown in the example.
+
+When using a proxy, hostnames are resolved on the proxy server instead of on the
+client. You can change this behavior by setting the
+<> option.
+
+["source","yaml",subs="attributes"]
+------------------------------------------------------------------------------
+output.logstash:
+  hosts: ["remote-host:5044"]
+  proxy_url: socks5://user:password@socks5-proxy:2233
+------------------------------------------------------------------------------
+
+[[logstash-proxy-use-local-resolver]]
+===== `proxy_use_local_resolver`
+
+The `proxy_use_local_resolver` option determines if Logstash hostnames are
+resolved locally when using a proxy. The default value is false, which means
+that when a proxy is used, the name resolution occurs on the proxy server.
+
+[[logstash-index]]
+===== `index`
+
+The index root name to write events to. The default is the Beat name. For
+example +"{beatname_lc}"+ generates +"[{beatname_lc}-]{version}-YYYY.MM.DD"+
+indices (for example, +"{beatname_lc}-{version}-2017.04.26"+).
+
+===== `ssl`
+
+Configuration options for SSL parameters like the root CA for Logstash connections. See
+<> for more information.
To use SSL, you must also configure the +https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html[Beats input plugin for Logstash] to use SSL/TLS. + +===== `timeout` + +The number of seconds to wait for responses from the Logstash server before timing out. The default is 30 (seconds). + +===== `max_retries` + +The number of times to retry publishing an event after a publishing failure. +After the specified number of retries, the events are typically dropped. +Some Beats, such as Filebeat, ignore the `max_retries` setting and retry until all +events are published. + +Set `max_retries` to a value less than 0 to retry until all events are published. + +The default is 3. + +===== `bulk_max_size` + +The maximum number of events to bulk in a single Logstash request. The default is 2048. + +If the Beat sends single events, the events are collected into batches. If the Beat publishes +a large batch of events (larger than the value specified by `bulk_max_size`), the batch is +split. + +Specifying a larger batch size can improve performance by lowering the overhead of sending events. +However big batch sizes can also increase processing times, which might result in +API errors, killed connections, timed-out publishing requests, and, ultimately, lower +throughput. + +Setting `bulk_max_size` to values less than or equal to 0 disables the +splitting of batches. When splitting is disabled, the queue decides on the +number of events to be contained in a batch. + + +===== `slow_start` + +If enabled only a subset of events in a batch of events is transferred per transaction. +The number of events to be sent increases up to `bulk_max_size` if no error is encountered. +On error the number of events per transaction is reduced again. + +The default is `false`. + +[[kafka-output]] +=== Configure the Kafka output + +++++ +Kafka +++++ + +The Kafka output sends the events to Apache Kafka. 
+
+Example configuration:
+
+[source,yaml]
+------------------------------------------------------------------------------
+output.kafka:
+  # initial brokers for reading cluster metadata
+  hosts: ["kafka1:9092", "kafka2:9092", "kafka3:9092"]
+
+  # message topic selection + partitioning
+  topic: '%{[fields.log_topic]}'
+  partition.round_robin:
+    reachable_only: false
+
+  required_acks: 1
+  compression: gzip
+  max_message_bytes: 1000000
+------------------------------------------------------------------------------
+
+NOTE: Events bigger than <> will be dropped. To avoid this problem, make sure {beatname_uc} does not generate events bigger than <>.
+
+==== Compatibility
+
+This output works with Kafka 0.8, 0.9, and 0.10.
+
+==== Configuration options
+
+You can specify the following options in the `kafka` section of the +{beatname_lc}.yml+ config file:
+
+===== `enabled`
+
+The `enabled` config is a boolean setting to enable or disable the output. If set
+to false, the output is disabled.
+
+The default value is true.
+
+===== `hosts`
+
+The list of Kafka broker addresses from where to fetch the cluster metadata.
+The cluster metadata contain the actual Kafka brokers events are published to.
+
+===== `version`
+
+The Kafka version that +{beatname_lc}+ is assumed to run against. Defaults to the oldest
+supported stable version (currently version 0.8.2.0).
+
+Event timestamps will be added if version 0.10.0.0+ is enabled.
+
+Valid values are all Kafka releases between `0.8.2.0` and `0.11.0.0`.
+
+===== `username`
+
+The username for connecting to Kafka. If username is configured, the password
+must be configured as well. Only SASL/PLAIN is supported.
+
+===== `password`
+
+The password for connecting to Kafka.
+
+===== `topic`
+
+The Kafka topic used for produced events. The setting can be a format string
+using any event field.
For example, you can use the +<> configuration option to add a custom +field called `log_topic` to the event, and then set `topic` to the value of the +custom field: + +[source,yaml] +----- +topic: '%{[fields.log_topic]}' +----- + + +===== `topics` + +Array of topic selector rules supporting conditionals, format string +based field access and name mappings. The first matching rule is used to +set the `topic` for the event to be published. If `topics` is missing or no +rule matches, the `topic` field will be used. + +Rule settings: + +*`topic`*: The topic format string to use. If the fields used are missing, the + rule fails. + +*`mapping`*: Dictionary mapping topic names to new names. + +*`default`*: Default string value if `mapping` does not find a match. + +*`when`*: Condition which must succeed in order to execute the current rule. + +===== `key` + +Optional Kafka event key. If configured, the event key must be unique and can be extracted from the event using a format string. + +===== `partition` + +Kafka output broker event partitioning strategy. Must be one of `random`, +`round_robin`, or `hash`. By default the `hash` partitioner is used. + +*`random.group_events`*: Sets the number of events to be published to the same + partition, before the partitioner selects a new partition at random. The + default value is 1, meaning a new partition is picked at random after each event. + +*`round_robin.group_events`*: Sets the number of events to be published to the + same partition, before the partitioner selects the next partition. The default + value is 1, meaning the next partition is selected after each event. + +*`hash.hash`*: List of fields used to compute the partitioning hash value. + If no field is configured, the event's `key` value is used. + +*`hash.random`*: Randomly distribute events if no hash or key value can be computed. + +All partitioners will try to publish events to all partitions by default. 
If a +partition's leader becomes unreachable for the beat, the output might block. All +partitioners support setting `reachable_only` to override this +behavior. If `reachable_only` is set to `true`, events will be published to +available partitions only. + +NOTE: Publishing to a subset of available partitions potentially increases resource usage because events may become unevenly distributed. + +===== `client_id` + +The configurable ClientID used for logging, debugging, and auditing purposes. The default is "beats". + +===== `worker` + +The number of concurrent load-balanced Kafka output workers. + +===== `codec` + +Output codec configuration. If the `codec` section is missing, events will be json encoded. + +See <> for more information. + +===== `metadata` + +Kafka metadata update settings. The metadata contains information about +brokers, topics, partitions, and active leaders to use for publishing. + +*`refresh_frequency`*:: Metadata refresh interval. Defaults to 10 minutes. + +*`retry.max`*:: Total number of metadata update retries when the cluster is in the middle of leader election. The default is 3. + +*`retry.backoff`*:: Waiting time between retries during leader elections. Default is 250ms. + +===== `max_retries` + +The number of times to retry publishing an event after a publishing failure. +After the specified number of retries, the events are typically dropped. +Some Beats, such as Filebeat, ignore the `max_retries` setting and retry until all +events are published. + +Set `max_retries` to a value less than 0 to retry until all events are published. + +The default is 3. + +===== `bulk_max_size` + +The maximum number of events to bulk in a single Kafka request. The default is 2048. + +===== `timeout` + +The number of seconds to wait for responses from the Kafka brokers before timing +out. The default is 30 (seconds). + +===== `broker_timeout` + +The maximum duration a broker will wait for the number of required ACKs. The default is 10s. 
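The retry and timeout options above can be combined in the `kafka` section of the config file. The following is a minimal sketch with illustrative values; the broker address and topic name are assumptions, not defaults:

[source,yaml]
------------------------------------------------------------------------------
output.kafka:
  hosts: ["kafka1:9092"]     # illustrative broker address
  topic: "logs"              # illustrative topic name
  max_retries: 3             # give up on an event after 3 failed attempts
  bulk_max_size: 2048        # maximum events per Kafka request
  timeout: 30                # seconds to wait for broker responses
  broker_timeout: 10s        # how long a broker may wait for the required ACKs
------------------------------------------------------------------------------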
+ +===== `channel_buffer_size` + +The number of messages buffered in the output pipeline for each Kafka broker. The default is 256. + +===== `keep_alive` + +The keep-alive period for an active network connection. If 0s, keep-alives are disabled. The default is 0 seconds. + +===== `compression` + +Sets the output compression codec. Must be one of `none`, `snappy`, `lz4`, or `gzip`. The default is `gzip`. + +[[kafka-max_message_bytes]] +===== `max_message_bytes` + +The maximum permitted size of JSON-encoded messages. Bigger messages will be dropped. The default value is 1000000 (bytes). This value should be equal to or less than the broker's `message.max.bytes`. + +===== `required_acks` + +The ACK reliability level required from the broker. 0=no response, 1=wait for local commit, -1=wait for all replicas to commit. The default is 1. + +Note: If set to 0, no ACKs are returned by Kafka. Messages might be lost silently on error. + +===== `ssl` + +Configuration options for SSL parameters like the root CA for Kafka connections. See +<> for more information. + +[[redis-output]] +=== Configure the Redis output + +++++ +Redis +++++ + +The Redis output inserts the events into a Redis list or a Redis channel. +This output plugin is compatible with +the https://www.elastic.co/guide/en/logstash/current/plugins-inputs-redis.html[Redis input plugin] for Logstash. + +Example configuration: + +["source","yaml",subs="attributes"] +------------------------------------------------------------------------------ +output.redis: + hosts: ["localhost"] + password: "my_password" + key: "{beatname_lc}" + db: 0 + timeout: 5 +------------------------------------------------------------------------------ + +==== Compatibility + +This output works with Redis 3.2.4. + +==== Configuration options + +You can specify the following options in the `redis` section of the +{beatname_lc}.yml+ config file: + +===== `enabled` + +The `enabled` config is a boolean setting to enable or disable the output. 
If set +to false, the output is disabled. + +The default value is true. + +===== `hosts` + +The list of Redis servers to connect to. If load balancing is enabled, the events are +distributed to the servers in the list. If one server becomes unreachable, the events are +distributed to the reachable servers only. You can define each Redis server by specifying +`HOST` or `HOST:PORT`. For example: `"192.15.3.2"` or `"test.redis.io:12345"`. If you +don't specify a port number, the value configured by `port` is used. + +===== `port` + +deprecated[5.0.0] + +The Redis port to use if `hosts` does not contain a port number. The default is 6379. + +===== `index` + +deprecated[5.0.0,The `index` setting is renamed to `key`] + +The name of the Redis list or channel the events are published to. The default is +"{beatname_lc}". + +===== `key` + +The name of the Redis list or channel the events are published to. The default is +"{beatname_lc}". + +The redis key can be set dynamically using a format string accessing any +fields in the event to be published. + +This configuration will use the `fields.list` field to set the redis list key. If +`fields.list` is missing, `fallback` will be used. + +["source","yaml"] +------------------------------------------------------------------------------ +output.redis: + hosts: ["localhost"] + key: "%{[fields.list]:fallback}" +------------------------------------------------------------------------------ + +===== `keys` + +Array of key selector configurations supporting conditionals, format string +based field access and name mappings. The first rule matching will be used to +set the `key` for the event to be published. If `keys` is missing or no +rule matches, the `key` field will be used. + +Rule settings: + +*`key`*: The key format string. If the fields used in the format string are missing, the rule fails. + +*`mapping`*: Dictionary mapping key values to new names + +*`default`*: Default string value if `mapping` does not find a match. 
+ +*`when`*: Condition which must succeed in order to execute the current rule. + +Example `keys` settings: + +["source","yaml"] +------------------------------------------------------------------------------ +output.redis: + hosts: ["localhost"] + key: "default_list" + keys: + - key: "info_list" # send to info_list if `message` field contains INFO + when.contains: + message: "INFO" + - key: "debug_list" # send to debug_list if `message` field contains DEBUG + when.contains: + message: "DEBUG" + - key: "%{[fields.list]}" + mapping: + "http": "frontend_list" + "nginx": "frontend_list" + "mysql": "backend_list" +------------------------------------------------------------------------------ + +===== `password` + +The password to authenticate with. The default is no authentication. + +===== `db` + +The Redis database number where the events are published. The default is 0. + +===== `datatype` + +The Redis data type to use for publishing events. If the data type is `list`, the +Redis `RPUSH` command is used and all events are added to the list with the key defined under `key`. +If the data type is `channel`, the Redis `PUBLISH` command is used, and all events +are pushed to the Redis pub/sub mechanism. The name of the channel is the one defined under `key`. +The default value is `list`. + +===== `codec` + +Output codec configuration. If the `codec` section is missing, events will be json encoded. + +See <> for more information. + +===== `host_topology` + +deprecated[5.0.0] + +The Redis host to connect to when using topology map support. Topology map support is disabled if this option is not set. + +===== `password_topology` + +deprecated[5.0.0] + +The password to use for authenticating with the Redis topology server. The default is no authentication. + +===== `db_topology` + +deprecated[5.0.0] + +The Redis database number where the topology information is stored. The default is 1. 
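As a minimal sketch of the `db` and `datatype` options described above, the following publishes events to a Redis channel instead of a list; the channel name here is an illustrative assumption:

[source,yaml]
------------------------------------------------------------------------------
output.redis:
  hosts: ["localhost"]
  key: "beat_events"     # illustrative channel name
  db: 0                  # default database
  datatype: channel      # use PUBLISH (pub/sub) instead of RPUSH (list)
------------------------------------------------------------------------------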
+ +===== `worker` + +The number of workers to use for each host configured to publish events to Redis. Use this setting along with the +`loadbalance` option. For example, if you have 2 hosts and 3 workers, in total 6 workers are started (3 for each host). + +===== `loadbalance` + +If set to true and multiple hosts or workers are configured, the output plugin load balances published events onto all +Redis hosts. If set to false, the output plugin sends all events to only one host (determined at random) and will switch +to another host if the currently selected one becomes unreachable. The default value is true. + +===== `timeout` + +The Redis connection timeout in seconds. The default is 5 seconds. + +===== `max_retries` + +The number of times to retry publishing an event after a publishing failure. +After the specified number of retries, the events are typically dropped. +Some Beats, such as Filebeat, ignore the `max_retries` setting and retry until all +events are published. + +Set `max_retries` to a value less than 0 to retry until all events are published. + +The default is 3. + +===== `bulk_max_size` + +The maximum number of events to bulk in a single Redis request or pipeline. The default is 2048. + +If the Beat sends single events, the events are collected into batches. If the +Beat publishes a large batch of events (larger than the value specified by +`bulk_max_size`), the batch is split. + +Specifying a larger batch size can improve performance by lowering the overhead +of sending events. However, big batch sizes can also increase processing times, +which might result in API errors, killed connections, timed-out publishing +requests, and, ultimately, lower throughput. + +Setting `bulk_max_size` to values less than or equal to 0 disables the +splitting of batches. When splitting is disabled, the queue decides on the +number of events to be contained in a batch. 
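The `worker` and `loadbalance` options described above can be combined. A sketch with illustrative host addresses (2 hosts with 3 workers each starts 6 workers in total):

[source,yaml]
------------------------------------------------------------------------------
output.redis:
  hosts: ["redis1:6379", "redis2:6379"]  # illustrative addresses
  loadbalance: true    # distribute events across both hosts
  worker: 3            # 3 workers per host, 6 in total
  timeout: 5
------------------------------------------------------------------------------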
+ +===== `ssl` + +Configuration options for SSL parameters like the root CA for Redis connections +guarded by SSL proxies (for example https://www.stunnel.org[stunnel]). See +<> for more information. + +===== `proxy_url` + +The URL of the SOCKS5 proxy to use when connecting to the Redis servers. The +value must be a URL with a scheme of `socks5://`. You cannot use a web proxy +because the protocol used to communicate with Redis is not based on HTTP. + +If the SOCKS5 proxy server requires client authentication, you can embed +a username and password in the URL. + +When using a proxy, hostnames are resolved on the proxy server instead of on the +client. You can change this behavior by setting the +<> option. + +[[redis-proxy-use-local-resolver]] +===== `proxy_use_local_resolver` + +This option determines whether Redis hostnames are resolved locally when using a proxy. +The default value is false, which means that name resolution occurs on the proxy server. + +[[file-output]] +=== Configure the File output + +++++ +File +++++ + +The File output dumps the transactions into a file where each transaction is in a JSON format. +Currently, this output is used for testing, but it can be used as input for +Logstash. + +["source","yaml",subs="attributes"] +------------------------------------------------------------------------------ +output.file: + path: "/tmp/{beatname_lc}" + filename: {beatname_lc} + #rotate_every_kb: 10000 + #number_of_files: 7 + #permissions: 0600 +------------------------------------------------------------------------------ + +==== Configuration options + +You can specify the following options in the `file` section of the +{beatname_lc}.yml+ config file: + +===== `enabled` + +The enabled config is a boolean setting to enable or disable the output. If set +to false, the output is disabled. + +The default value is true. + +[[path]] +===== `path` + +The path to the directory where the generated files will be saved. This option is +mandatory. 
+ +===== `filename` + +The name of the generated files. The default is set to the Beat name. For example, the files +generated by default for {beatname_uc} would be "{beatname_lc}", "{beatname_lc}.1", "{beatname_lc}.2", and so on. + +===== `rotate_every_kb` + +The maximum size in kilobytes of each file. When this size is reached, the files are +rotated. The default value is 10240 KB. + +===== `number_of_files` + +The maximum number of files to save under <>. When this number of files is reached, the +oldest file is deleted, and the rest of the files are shifted from last to first. The default +is 7 files. + +===== `permissions` + +Permissions to use for file creation. The default is 0600. + +===== `codec` + +Output codec configuration. If the `codec` section is missing, events will be json encoded. + +See <> for more information. + +[[console-output]] +=== Configure the Console output + +++++ +Console +++++ + +The Console output writes events in JSON format to stdout. + +[source,yaml] +------------------------------------------------------------------------------ +output.console: + pretty: true +------------------------------------------------------------------------------ + +==== Configuration options + +You can specify the following options in the `console` section of the +{beatname_lc}.yml+ config file: + +===== `pretty` + +If `pretty` is set to true, events written to stdout will be nicely formatted. The default is false. + +===== `codec` + +Output codec configuration. If the `codec` section is missing, events will be json encoded using the `pretty` option. + +See <> for more information. + + +===== `enabled` + +The enabled config is a boolean setting to enable or disable the output. If set +to false, the output is disabled. + +The default value is true. + +===== `bulk_max_size` + +The maximum number of events to buffer internally during publishing. The default is 2048. + +Specifying a larger batch size may add some latency and buffering during publishing. 
However, for Console output, this +setting does not affect how events are published. + +Setting `bulk_max_size` to values less than or equal to 0 disables the +splitting of batches. When splitting is disabled, the queue decides on the +number of events to be contained in a batch. + +[[configuration-output-codec]] +=== Configure the output codec + +++++ +Output codec +++++ + +For outputs that do not require a specific encoding, you can change the encoding +by using the codec configuration. You can specify either the `json` or `format` +codec. By default the `json` codec is used. + +*`json.pretty`*: If `pretty` is set to true, events will be nicely formatted. The default is false. + +Example configuration that uses the `json` codec with pretty printing enabled to write events to the console: + +[source,yaml] +------------------------------------------------------------------------------ +output.console: + codec.json: + pretty: true +------------------------------------------------------------------------------ + +*`format.string`*: Configurable format string used to create a custom formatted message. + +Example configuration that uses the `format` codec to print the event's timestamp and message field to the console: + +[source,yaml] +------------------------------------------------------------------------------ +output.console: + codec.format: + string: '%{[@timestamp]} %{[message]}' +------------------------------------------------------------------------------ + +[[configure-cloud-id]] +=== Configure the output for the Elastic Cloud + +++++ +Cloud +++++ + +{beatname_uc} comes with two settings that simplify the output configuration +when used together with https://cloud.elastic.co/[Elastic Cloud]. When defined, +these settings overwrite settings from other parts of the configuration. 
+ +Example: + +["source","yaml",subs="attributes"] +------------------------------------------------------------------------------ +cloud.id: "staging:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRjZWM2ZjI2MWE3NGJmMjRjZTMzYmI4ODExYjg0Mjk0ZiRjNmMyY2E2ZDA0MjI0OWFmMGNjN2Q3YTllOTYyNTc0Mw==" +cloud.auth: "elastic:{pwd}" +------------------------------------------------------------------------------ + +These settings can also be specified at the command line, like this: + + +["source","sh",subs="attributes"] +------------------------------------------------------------------------------ +{beatname_lc} -e -E cloud.id="" -E cloud.auth="" +------------------------------------------------------------------------------ + + +==== `cloud.id` + +The Cloud ID, which can be found in the Elastic Cloud web console, is used by +{beatname_uc} to resolve the Elasticsearch and Kibana URLs. This setting +overwrites the `output.elasticsearch.hosts` and `setup.kibana.host` settings. + +==== `cloud.auth` + +When specified, the `cloud.auth` overwrites the `output.elasticsearch.username` and +`output.elasticsearch.password` settings. Because the Kibana settings inherit +the username and password from the Elasticsearch output, this can also be used +to set the `setup.kibana.username` and `setup.kibana.password` options. + +endif::[] diff --git a/docs/copied-from-beats/shared-configuring.asciidoc b/docs/copied-from-beats/shared-configuring.asciidoc new file mode 100644 index 00000000000..b1d6695a39b --- /dev/null +++ b/docs/copied-from-beats/shared-configuring.asciidoc @@ -0,0 +1,26 @@ +//Added conditional coding to support Beats that don't offer all of these install options + +ifeval::["{beatname_lc}"!="auditbeat"] + +To configure {beatname_uc}, you edit the configuration file. For rpm and deb, +you'll find the configuration file at +/etc/{beatname_lc}/{beatname_lc}.yml+. Under +Docker, it's located at +/usr/share/{beatname_lc}/{beatname_lc}.yml+. 
For mac and win, +look in the archive that you just extracted. There’s also a full example +configuration file called +{beatname_lc}.reference.yml+ that shows all non-deprecated +options. + +endif::[] + +ifeval::["{beatname_lc}"=="auditbeat"] + +To configure {beatname_uc}, you edit the configuration file. For rpm and deb, +you'll find the configuration file at +/etc/{beatname_lc}/{beatname_lc}.yml+. +For mac and win, look in the archive that you just extracted. There’s also a +full example configuration file called +{beatname_lc}.reference.yml+ that shows +all non-deprecated options. + +endif::[] + +See the +{libbeat}/config-file-format.html[Config File Format] section of the +_Beats Platform Reference_ for more about the structure of the config file. diff --git a/docs/copied-from-beats/shared-kibana-config.asciidoc b/docs/copied-from-beats/shared-kibana-config.asciidoc new file mode 100644 index 00000000000..f9a0a5bc78f --- /dev/null +++ b/docs/copied-from-beats/shared-kibana-config.asciidoc @@ -0,0 +1,108 @@ +////////////////////////////////////////////////////////////////////////// +//// This content is shared by all Elastic Beats. Make sure you keep the +//// descriptions here generic enough to work for all Beats that include +//// this file. When using cross references, make sure that the cross +//// references resolve correctly for any files that include this one. +//// Use the appropriate variables defined in the index.asciidoc file to +//// resolve Beat names: beatname_uc and beatname_lc. +//// Use the following include to pull this content into a doc file: +//// include::../../libbeat/docs/shared-kibana-config.asciidoc[] +////////////////////////////////////////////////////////////////////////// + +[[setup-kibana-endpoint]] +== Set up the Kibana endpoint + +Starting with Beats 6.0.0, the Kibana dashboards are loaded into Kibana +via the Kibana API. This requires a Kibana endpoint configuration. 
+ +You configure the endpoint in the `setup.kibana` section of the ++{beatname_lc}.yml+ config file. + +Here is an example configuration: + +[source,yaml] +---- +setup.kibana.host: "localhost:5601" +---- + +[float] +=== Configuration options + +You can specify the following options in the `setup.kibana` section of the ++{beatname_lc}.yml+ config file: + +[float] +==== `setup.kibana.host` + +The Kibana host where the dashboards will be loaded. The default is +`127.0.0.1:5601`. The value of `host` can be a `URL` or `IP:PORT`. For example: `http://192.15.3.2`, `192.15.3.2:5601` or `http://192.15.3.2:6701/path`. If no +port is specified, `5601` is used. + +NOTE: When a node is defined as an `IP:PORT`, the _scheme_ and _path_ are taken +from the <> and +<> config options. + +IPv6 addresses must be defined using the following format: +`https://[2001:db8::1]:5601`. + +[float] +[[kibana-protocol-option]] +==== `setup.kibana.protocol` + +The name of the protocol Kibana is reachable on. The options are: `http` or +`https`. The default is `http`. However, if you specify a URL for host, the +value of `protocol` is overridden by whatever scheme you specify in the URL. + +Example config: + +[source,yaml] +---- +setup.kibana.host: "192.0.2.255:5601" +setup.kibana.protocol: "https" +setup.kibana.path: /kibana +---- + + +[float] +==== `setup.kibana.username` + +The basic authentication username for connecting to Kibana. If you don't +specify a value for this setting, {beatname_uc} uses the `username` specified +for the Elasticsearch output. + +[float] +==== `setup.kibana.password` + +The basic authentication password for connecting to Kibana. If you don't +specify a value for this setting, {beatname_uc} uses the `password` specified +for the Elasticsearch output. + +[float] +[[kibana-path-option]] +==== `setup.kibana.path` + +An HTTP path prefix that is prepended to the HTTP API calls. 
This is useful for +cases where Kibana listens behind an HTTP reverse proxy that exports the API +under a custom prefix. + +[float] +==== `setup.kibana.ssl.enabled` + +Enables {beatname_uc} to use SSL settings when connecting to Kibana via HTTPS. +If you configure {beatname_uc} to connect over HTTPS, this setting defaults to +`true` and {beatname_uc} uses the default SSL settings. + +Example configuration: + +[source,yaml] +---- +setup.kibana.host: "192.0.2.255:5601" +setup.kibana.protocol: "https" +setup.kibana.ssl.enabled: true +setup.kibana.ssl.certificate_authorities: ["/etc/pki/root/ca.pem"] +setup.kibana.ssl.certificate: "/etc/pki/client/cert.pem" +setup.kibana.ssl.key: "/etc/pki/client/cert.key" +---- + +See <> for more information. + diff --git a/docs/copied-from-beats/shared-ssl-config.asciidoc b/docs/copied-from-beats/shared-ssl-config.asciidoc new file mode 100644 index 00000000000..e813bedf909 --- /dev/null +++ b/docs/copied-from-beats/shared-ssl-config.asciidoc @@ -0,0 +1,170 @@ +[[configuration-ssl]] +== Specify SSL settings + +You can specify SSL options for any <> that supports +SSL. You can also specify SSL options when you +<>. + +Example output config with SSL enabled: + +[source,yaml] +---- +output.elasticsearch.hosts: ["192.168.1.42:9200"] +output.elasticsearch.ssl.certificate_authorities: ["/etc/pki/root/ca.pem"] +output.elasticsearch.ssl.certificate: "/etc/pki/client/cert.pem" +output.elasticsearch.ssl.key: "/etc/pki/client/cert.key" +---- + +ifndef::only-elasticsearch[] +Also see <>. 
endif::[] + +Example Kibana endpoint config with SSL enabled: + +[source,yaml] +---- +setup.kibana.host: "192.0.2.255:5601" +setup.kibana.protocol: "https" +setup.kibana.ssl.enabled: true +setup.kibana.ssl.certificate_authorities: ["/etc/pki/root/ca.pem"] +setup.kibana.ssl.certificate: "/etc/pki/client/cert.pem" +setup.kibana.ssl.key: "/etc/pki/client/cert.key" +---- + +[float] +=== Configuration options + +You can specify the following options in the `ssl` section of the +{beatname_lc}.yml+ config file: + +[float] +==== `enabled` + +The `enabled` setting can be used to disable the SSL configuration by setting +it to `false`. The default value is `true`. + +NOTE: SSL settings are disabled if either `enabled` is set to `false` or the +`ssl` section is missing. + +[float] +==== `certificate_authorities` + +The list of root certificates for server verifications. If `certificate_authorities` is empty or not set, the trusted certificate authorities of the host system are used. + +[float] +[[certificate]] +==== `certificate: "/etc/pki/client/cert.pem"` + +The path to the certificate for SSL client authentication. If the certificate +is not specified, client authentication is not available. The connection +might fail if the server requests client authentication. If the SSL server does not +require client authentication, the certificate will be loaded, but not requested or used +by the server. + +When this option is configured, the <> option is also required. + +[float] +[[certificate_key]] +==== `key: "/etc/pki/client/cert.key"` + +The client certificate key used for client authentication. This option is required if <> is specified. + +[float] +==== `key_passphrase` + +The passphrase used to decrypt an encrypted key stored in the configured `key` file. + +[float] +==== `supported_protocols` + +List of allowed SSL/TLS versions. If the SSL/TLS server does not support any of the +configured protocol versions, the connection will be dropped during or after the handshake. 
The +setting is a list of allowed protocol versions: +`SSLv3`, `TLSv1` for TLS version 1.0, `TLSv1.0`, `TLSv1.1` and `TLSv1.2`. + +The default value is `[TLSv1.0, TLSv1.1, TLSv1.2]`. + +[float] +==== `verification_mode` + +This option controls whether the client verifies server certificates and host +names. Valid values are `none` and `full`. If `verification_mode` is set +to `none`, all server host names and certificates are accepted. In this mode, +TLS-based connections are susceptible to man-in-the-middle attacks. Use this +option for testing only. + +The default is `full`. + +[float] +==== `cipher_suites` + +The list of cipher suites to use. The first entry has the highest priority. +If this option is omitted, the Go crypto library's default +suites are used (recommended). + +The following cipher suites are available: + +* RSA-RC4-128-SHA (disabled by default - RC4 not recommended) +* RSA-3DES-CBC3-SHA +* RSA-AES-128-CBC-SHA +* RSA-AES-256-CBC-SHA +* ECDHE-ECDSA-RC4-128-SHA (disabled by default - RC4 not recommended) +* ECDHE-ECDSA-AES-128-CBC-SHA +* ECDHE-ECDSA-AES-256-CBC-SHA +* ECDHE-RSA-RC4-128-SHA (disabled by default- RC4 not recommended) +* ECDHE-RSA-3DES-CBC3-SHA +* ECDHE-RSA-AES-128-CBC-SHA +* ECDHE-RSA-AES-256-CBC-SHA +* ECDHE-RSA-AES-128-GCM-SHA256 (TLS 1.2 only) +* ECDHE-ECDSA-AES-128-GCM-SHA256 (TLS 1.2 only) +* ECDHE-RSA-AES-256-GCM-SHA384 (TLS 1.2 only) +* ECDHE-ECDSA-AES-256-GCM-SHA384 (TLS 1.2 only) + +Here is a list of acronyms used in defining the cipher suites: + +* 3DES: + Cipher suites using triple DES + +* AES-128/256: + Cipher suites using AES with 128/256-bit keys. + +* CBC: + Cipher using Cipher Block Chaining as block cipher mode. + +* ECDHE: + Cipher suites using Elliptic Curve Diffie-Hellman (DH) ephemeral key exchange. + +* ECDSA: + Cipher suites using Elliptic Curve Digital Signature Algorithm for authentication. + +* GCM: + Galois/Counter mode is used for symmetric key cryptography. + +* RC4: + Cipher suites using RC4. 
+ +* RSA: + Cipher suites using RSA. + +* SHA, SHA256, SHA384: + Cipher suites using SHA-1, SHA-256 or SHA-384. + +[float] +==== `curve_types` + +The list of curve types for ECDHE (Elliptic Curve Diffie-Hellman ephemeral key exchange). + +The following elliptic curve types are available: + +* P-256 +* P-384 +* P-521 + +[float] +==== `renegotiation` + +This configures what types of TLS renegotiation are supported. The valid options +are `never`, `once`, and `freely`. The default value is never. + +* `never` - Disables renegotiation. +* `once` - Allows a remote server to request renegotiation once per connection. +* `freely` - Allows a remote server to repeatedly request renegotiation. diff --git a/docs/copied-from-beats/shared-template-load.asciidoc b/docs/copied-from-beats/shared-template-load.asciidoc new file mode 100644 index 00000000000..9ca7d0e0217 --- /dev/null +++ b/docs/copied-from-beats/shared-template-load.asciidoc @@ -0,0 +1,254 @@ +////////////////////////////////////////////////////////////////////////// +//// This content is shared by all Elastic Beats. Make sure you keep the +//// descriptions here generic enough to work for all Beats that include +//// this file. When using cross references, make sure that the cross +//// references resolve correctly for any files that include this one. +//// Use the appropriate variables defined in the index.asciidoc file to +//// resolve Beat names: beatname_uc and beatname_lc +//// Use the following include to pull this content into a doc file: +//// include::../../libbeat/docs/shared-template-load.asciidoc[] +//// If you want to include conditional content, you also need to +//// add the following doc attribute definition before the +//// include statement so that you have: +//// :allplatforms: +//// include::../../libbeat/docs/shared-template-load.asciidoc[] +//// This content must be embedded underneath a level 3 heading. 
+////////////////////////////////////////////////////////////////////////// + + +In Elasticsearch, {elasticsearch}/indices-templates.html[index +templates] are used to define settings and mappings that determine how fields +should be analyzed. + +The recommended index template file for {beatname_uc} is installed by the +{beatname_uc} packages. If you accept the default configuration in the ++{beatname_lc}.yml+ config file, {beatname_uc} loads the template automatically +after successfully connecting to Elasticsearch. If the template already exists, +it's not overwritten unless you configure {beatname_uc} to do so. + +You can disable automatic template loading, or load your own template, by +configuring template loading options in the {beatname_uc} configuration file. + +You can also set options to change the name of the index and index template. + +ifndef::only-elasticsearch[] +NOTE: A connection to Elasticsearch is required to load the index template. If +the output is Logstash, you must +<>. +endif::[] + +For more information, see: + +ifdef::only-elasticsearch[] +* <> +* <> +endif::[] + +ifndef::only-elasticsearch[] +* <> +* <> - required for Logstash output +endif::[] + +[[load-template-auto]] +==== Configure template loading + +By default, {beatname_uc} automatically loads the recommended template file, ++fields.yml+, if the Elasticsearch output is enabled. You can change the +defaults in the +{beatname_lc}.yml+ config file to: + +* **Load a different template** ++ +[source,yaml] +----- +setup.template.name: "your_template_name" +setup.template.fields: "path/to/fields.yml" +----- ++ +If the template already exists, it’s not overwritten unless you configure +{beatname_uc} to do so. + +* **Overwrite an existing template** ++ +[source,yaml] +----- +setup.template.overwrite: true +----- + +* **Disable automatic template loading** ++ +[source,yaml] +----- +setup.template.enabled: false +----- ++ +If you disable automatic template loading, you need to +<>. 
+ +* **Change the index name** ++ +By default, {beatname_uc} writes events to indices named ++{beatname_lc}-{version}-yyyy.MM.dd+, where `yyyy.MM.dd` is the date when the +events were indexed. To use a different name, you set the +<> option in the Elasticsearch output. The value +that you specify should include the root name of the index plus version and +date information. You also need to configure the `setup.template.name` and +`setup.template.pattern` options to match the new name. For example: ++ +["source","sh",subs="attributes,callouts"] +----- +output.elasticsearch.index: "customname-%{[beat.version]}-%{+yyyy.MM.dd}" +setup.template.name: "customname" +setup.template.pattern: "customname-*" +setup.dashboards.index: "customname-*" <1> +----- +<1> If you plan to +<>, also set +this option to overwrite the index name defined in the dashboards and index +pattern. + +See <> for the full list of configuration options. + + +[[load-template-manually]] +==== Load the template manually + +To load the template manually, run the <> command. A +connection to Elasticsearch is required. If Logstash output is enabled, you need +to temporarily disable the Logstash output and enable Elasticsearch by using the +`-E` option. The examples here assume that Logstash output is enabled. You can +omit the `-E` flags if Elasticsearch output is already enabled. + +If you are connecting to a secured Elasticsearch cluster, make sure you've +configured credentials as described in <<{beatname_lc}-configuration>>. + +If the host running {beatname_uc} does not have direct connectivity to +Elasticsearch, see <>. + +To load the template, use the appropriate command for your system. 
+ +ifdef::allplatforms[] + +ifeval::["{requires-sudo}"=="yes"] + +include::./shared-note-sudo.asciidoc[] + +endif::[] + +*deb and rpm:* + +["source","sh",subs="attributes"] +---- +{beatname_lc} setup --template -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]' +---- + +*mac:* + +["source","sh",subs="attributes"] +---- +./{beatname_lc} setup --template -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]' +---- + + +ifeval::["{beatname_lc}"!="auditbeat"] + +*docker:* + +["source","sh",subs="attributes"] +---------------------------------------------------------------------- +docker run {dockerimage} setup --template -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]' +---------------------------------------------------------------------- + + +endif::[] + +*win:* + +endif::allplatforms[] + +Open a PowerShell prompt as an Administrator (right-click the PowerShell icon +and select *Run As Administrator*). If you are running Windows XP, you may need +to download and install PowerShell. + +From the PowerShell prompt, change to the directory where you installed {beatname_uc}, +and run: + +["source","sh",subs="attributes,callouts"] +---------------------------------------------------------------------- +PS > {beatname_lc} setup --template -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]' +---------------------------------------------------------------------- + + +[[force-kibana-new]] +===== Force Kibana to look at newest documents + +If you've already used {beatname_uc} to index data into Elasticsearch, +the index may contain old documents. After you load the index template, +you can delete the old documents from +{beatname_lc}-*+ to force Kibana to look +at the newest documents. 
Use this command: + +*deb, rpm, and mac:* + +["source","sh",subs="attributes"] +---------------------------------------------------------------------- +curl -XDELETE 'http://localhost:9200/{beatname_lc}-*' +---------------------------------------------------------------------- + +*win:* + +["source","sh",subs="attributes"] +---------------------------------------------------------------------- +PS > Invoke-RestMethod -Method Delete "http://localhost:9200/{beatname_lc}-*" +---------------------------------------------------------------------- + + +This command deletes all indices that match the pattern +{beatname_lc}-*+. +Before running this command, make sure you want to delete all indices that match +the pattern. + +[[load-template-manually-alternate]] +==== Load the template manually (alternate method) + +If the host running {beatname_uc} does not have direct connectivity to +Elasticsearch, you can export the index template to a file, move it to a +machine that does have connectivity, and then install the template manually. + +. Export the index template: ++ +ifdef::allplatforms[] +*deb and rpm:* ++ +["source","sh",subs="attributes"] +---- +{beatname_lc} export template > {beatname_lc}.template.json +---- ++ +*mac:* ++ +["source","sh",subs="attributes"] +---- +./{beatname_lc} export template > {beatname_lc}.template.json +---- ++ +*win*: ++ +endif::allplatforms[] +["source","sh",subs="attributes"] +---- +PS> .{backslash}{beatname_lc}.exe export template --es.version {stack-version} | Out-File -Encoding UTF8 {beatname_lc}.template.json +---- + +. 
Install the template:
++
+*deb, rpm, and mac:*
++
+["source","sh",subs="attributes"]
+----
+curl -XPUT -H 'Content-Type: application/json' http://localhost:9200/_template/{beatname_lc}-{stack-version} -d@{beatname_lc}.template.json
+----
++
+*win*:
++
+["source","sh",subs="attributes"]
+----
+PS > Invoke-RestMethod -Method Put -ContentType "application/json" -InFile {beatname_lc}.template.json -Uri http://localhost:9200/_template/{beatname_lc}-{stack-version}
+----
diff --git a/docs/copied-from-beats/template-config.asciidoc b/docs/copied-from-beats/template-config.asciidoc
new file mode 100644
index 00000000000..fbe079918f3
--- /dev/null
+++ b/docs/copied-from-beats/template-config.asciidoc
@@ -0,0 +1,84 @@
+[[configuration-template]]
+
+== Load the Elasticsearch index template
+
+The `setup.template` section of the +{beatname_lc}.yml+ config file specifies
+the {elasticsearch}/indices-templates.html[index template] to use for setting
+mappings in Elasticsearch. If template loading is enabled (the default),
+{beatname_uc} loads the index template automatically after successfully
+connecting to Elasticsearch.
+
+ifndef::only-elasticsearch[]
+
+NOTE: A connection to Elasticsearch is required to load the index template. If
+the output is Logstash, you must <>.
+
+endif::[]
+
+You can adjust the following settings to load your own template or overwrite an
+existing one.
+
+*`setup.template.enabled`*:: Set to false to disable template loading. If you set this to false,
+you must <>.
+
+*`setup.template.name`*:: The name of the template. The default is
++{beatname_lc}+. The {beatname_uc} version is always appended to the given
+name, so the final name is +{beatname_lc}-%\{[beat.version]\}+.
+
+// Maintainers: a backslash character is required to escape curly braces and
+// asterisks in inline code examples that contain asciidoc attributes. You'll
+// note that a backslash does not appear before the asterisk
+// in +{beatname_lc}-%\{[beat.version]\}-*+.
This is intentional and formats +// the example as expected. + +*`setup.template.pattern`*:: The template pattern to apply to the default index +settings. The default pattern is +{beatname_lc}-\*+. The {beatname_uc} version is always +included in the pattern, so the final pattern is ++{beatname_lc}-%\{[beat.version]\}-*+. The wildcard character `-*` is used to +match all daily indices. ++ +Example: ++ +["source","yaml",subs="attributes"] +---------------------------------------------------------------------- +setup.template.name: "{beatname_lc}" +setup.template.pattern: "{beatname_lc}-*" +---------------------------------------------------------------------- + +*`setup.template.fields`*:: The path to the YAML file describing the fields. The default is +fields.yml+. If a +relative path is set, it is considered relative to the config path. See the <> +section for details. + +*`setup.template.overwrite`*:: A boolean that specifies whether to overwrite the existing template. The default +is false. + +*`setup.template.settings`*:: A dictionary of settings to place into the `settings.index` dictionary of the +Elasticsearch template. For more details about the available Elasticsearch mapping options, please +see the Elasticsearch {elasticsearch}/mapping.html[mapping reference]. ++ +Example: ++ +["source","yaml",subs="attributes"] +---------------------------------------------------------------------- +setup.template.name: "{beatname_lc}" +setup.template.fields: "fields.yml" +setup.template.overwrite: false +setup.template.settings: + index.number_of_shards: 1 + index.number_of_replicas: 1 +---------------------------------------------------------------------- + +*`setup.template.settings._source`*:: A dictionary of settings for the `_source` field. For the available settings, +please see the Elasticsearch {elasticsearch}/mapping-source-field.html[reference]. 
++ +Example: ++ +["source","yaml",subs="attributes"] +---------------------------------------------------------------------- +setup.template.name: "{beatname_lc}" +setup.template.fields: "fields.yml" +setup.template.overwrite: false +setup.template.settings: + _source.enabled: false +---------------------------------------------------------------------- diff --git a/docs/index.asciidoc b/docs/index.asciidoc index 94c714ef51e..b31ffbdb674 100644 --- a/docs/index.asciidoc +++ b/docs/index.asciidoc @@ -5,6 +5,7 @@ include::{asciidoc-dir}/../../shared/attributes.asciidoc[] :version: {stack-version} :beatname_lc: apm-server :beatname_uc: APM Server +:beat_default_index_prefix: apm :beatname_pkg: {beatname_lc} :dockerimage: docker.elastic.co/apm/{beatname_lc}:{version} :dockergithub: https://github.com/elastic/apm-server-docker/tree/{doc-branch} diff --git a/docs/overview.asciidoc b/docs/overview.asciidoc index 5dc1184e8ab..1f20f570d58 100644 --- a/docs/overview.asciidoc +++ b/docs/overview.asciidoc @@ -29,5 +29,5 @@ and as such it shares many of the same configuration options as beats. In the following you can read more about * <> -* <> +* <> diff --git a/docs/setting-up-and-running.asciidoc b/docs/setting-up-and-running.asciidoc index 628838a9377..019d5945f11 100644 --- a/docs/setting-up-and-running.asciidoc +++ b/docs/setting-up-and-running.asciidoc @@ -56,9 +56,10 @@ include::./high-availability.asciidoc[] include::./security.asciidoc[] +include::./dashboards.asciidoc[] + include::./copied-from-beats/command-reference.asciidoc[] include::./copied-from-beats/shared-directory-layout.asciidoc[] - include::./copied-from-beats/running-on-docker.asciidoc[] From 45fa77bff8a72f12408ad3c7dbf24836bf42775d Mon Sep 17 00:00:00 2001 From: Ron Cohen Date: Thu, 25 Jan 2018 20:40:25 +0100 Subject: [PATCH 03/31] Add missing files dashboards and template-config. 
--- docs/dashboards.asciidoc | 5 +++++ docs/template-config.asciidoc | 5 +++++ 2 files changed, 10 insertions(+) create mode 100644 docs/dashboards.asciidoc create mode 100644 docs/template-config.asciidoc diff --git a/docs/dashboards.asciidoc b/docs/dashboards.asciidoc new file mode 100644 index 00000000000..cf10ae8f515 --- /dev/null +++ b/docs/dashboards.asciidoc @@ -0,0 +1,5 @@ +[[load-kibana-dashboards]] +=== Dashboards + + +include::./copied-from-beats/dashboards.asciidoc[] \ No newline at end of file diff --git a/docs/template-config.asciidoc b/docs/template-config.asciidoc new file mode 100644 index 00000000000..1a9b2ca065f --- /dev/null +++ b/docs/template-config.asciidoc @@ -0,0 +1,5 @@ +include::./copied-from-beats/template-config.asciidoc[] + +=== Manually loading template configuration + +include::./copied-from-beats/shared-template-load.asciidoc[] From f5eb1cdee1dbd067b2a81862cf9b0215e02f73e6 Mon Sep 17 00:00:00 2001 From: Ron Cohen Date: Thu, 25 Jan 2018 21:30:09 +0100 Subject: [PATCH 04/31] -inging headlines like before. 
--- docs/configuring.asciidoc | 2 +- docs/setting-up-and-running.asciidoc | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/configuring.asciidoc b/docs/configuring.asciidoc index 59a1f5cdbe4..08972d01b36 100644 --- a/docs/configuring.asciidoc +++ b/docs/configuring.asciidoc @@ -1,5 +1,5 @@ [[apm-server-configuration]] -= Configure APM Server += Configuring APM Server [partintro] -- diff --git a/docs/setting-up-and-running.asciidoc b/docs/setting-up-and-running.asciidoc index 019d5945f11..e79d6281c77 100644 --- a/docs/setting-up-and-running.asciidoc +++ b/docs/setting-up-and-running.asciidoc @@ -1,6 +1,6 @@ [[setting-up-and-running]] -== Set up and run APM Server +== Setting up and running APM Server In a production environment, you would put APM Server on its own machines, From d00e155c68fe58d30f95e5a05681ccd208905c3f Mon Sep 17 00:00:00 2001 From: Ron Cohen Date: Thu, 25 Jan 2018 22:35:18 +0100 Subject: [PATCH 05/31] Update TOC for Configuring APM Server --- docs/configuring.asciidoc | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/docs/configuring.asciidoc b/docs/configuring.asciidoc index 08972d01b36..233843e298a 100644 --- a/docs/configuring.asciidoc +++ b/docs/configuring.asciidoc @@ -7,7 +7,12 @@ include::./copied-from-beats/shared-configuring.asciidoc[] The following explains how to configure APM Server: +* <> +* <> +* <> * <> +* <> +* <> * <> * <> -- From 518a9d9f5975f14214d8a0ffd13ae8cc0877fb3a Mon Sep 17 00:00:00 2001 From: Ron Cohen Date: Fri, 26 Jan 2018 14:19:39 +0100 Subject: [PATCH 06/31] Updated following https://github.com/elastic/beats/pull/6184/commits/a58c36abb3716641d3089a498a912046aa88d5a5 --- docs/copied-from-beats/dashboardsconfig.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/copied-from-beats/dashboardsconfig.asciidoc b/docs/copied-from-beats/dashboardsconfig.asciidoc index 03a24c41c5d..bdc30c3a3da 100644 --- a/docs/copied-from-beats/dashboardsconfig.asciidoc +++ 
b/docs/copied-from-beats/dashboardsconfig.asciidoc @@ -20,7 +20,7 @@ To load the dashboards, you can either enable dashboard loading in the run the `setup` command. Dashboard loading is disabled by default. When dashboard loading is enabled, {beatname_uc} uses the Kibana API to load the -sample dashboards. Dashboard loading is only attempted at Beat startup. +sample dashboards. Dashboard loading is only attempted when {beatname_uc} starts up. If Kibana is not available at startup, {beatname_uc} will stop with an error. To enable dashboard loading, add the following setting to the config file: From 3ef09a67dbc663bc41d41c3c4c15f33e8bc7f7e6 Mon Sep 17 00:00:00 2001 From: Ron Cohen Date: Fri, 26 Jan 2018 14:19:49 +0100 Subject: [PATCH 07/31] Added "setting up and running" to the overview TOC. --- docs/overview.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/overview.asciidoc b/docs/overview.asciidoc index 1f20f570d58..9c709fbd54b 100644 --- a/docs/overview.asciidoc +++ b/docs/overview.asciidoc @@ -29,5 +29,5 @@ and as such it shares many of the same configuration options as beats. 
In the following you can read more about * <> +* <> * <> - From 9a8a6d1b6c4e50211d0ee5df042a82af186f3986 Mon Sep 17 00:00:00 2001 From: Ron Cohen Date: Fri, 26 Jan 2018 14:27:37 +0100 Subject: [PATCH 08/31] Updated following https://github.com/elastic/beats/pull/6186/commits/cf4c3b96233c36b0942d59849943bc505da0bcb6 --- docs/copied-from-beats/loggingconfig.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/copied-from-beats/loggingconfig.asciidoc b/docs/copied-from-beats/loggingconfig.asciidoc index c2af37a205a..7f7f75cc91a 100644 --- a/docs/copied-from-beats/loggingconfig.asciidoc +++ b/docs/copied-from-beats/loggingconfig.asciidoc @@ -11,7 +11,7 @@ ////////////////////////////////////////////////////////////////////////// [[configuration-logging]] -== Set up logging +== Configure logging The `logging` section of the +{beatname_lc}.yml+ config file contains options for configuring the Beats logging output. The logging system can write logs to From d228c8dd533aa88a712a734b23e08025ec26347e Mon Sep 17 00:00:00 2001 From: Ron Cohen Date: Fri, 26 Jan 2018 14:59:21 +0100 Subject: [PATCH 09/31] Updated command reference following https://github.com/elastic/beats/pull/6193/commits/2584eba38050017291f6436e1a888be11536b2e0 --- docs/copied-from-beats/command-reference.asciidoc | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/docs/copied-from-beats/command-reference.asciidoc b/docs/copied-from-beats/command-reference.asciidoc index 248219ca91c..9974937ac02 100644 --- a/docs/copied-from-beats/command-reference.asciidoc +++ b/docs/copied-from-beats/command-reference.asciidoc @@ -524,7 +524,7 @@ For example: + ["source","sh",subs="attributes"] ---------------------------------------------------------------------- -{beatname_lc} -E "name=mybeat" -E "output.elasticsearch.hosts=["http://myhost:9200"]" +{beatname_lc} -E "name=mybeat" -E "output.elasticsearch.hosts=['http://myhost:9200']" 
---------------------------------------------------------------------- + This setting is applied to the currently running {beatname_uc} process. @@ -537,7 +537,7 @@ ifeval::["{beatname_lc}"=="filebeat"] + ["source","sh",subs="attributes"] ---------------------------------------------------------------------- -{beatname_lc} -modules=nginx -M "nginx.access.var.paths=[/var/log/nginx/access.log*]" -M "nginx.access.var.pipeline=no_plugins" +{beatname_lc} -modules=nginx -M "nginx.access.var.paths=['/var/log/nginx/access.log*']" -M "nginx.access.var.pipeline=no_plugins" ---------------------------------------------------------------------- endif::[] @@ -579,3 +579,4 @@ the _Beats Platform Reference_ for more information. *`-v, --v`*:: Logs INFO-level messages. + From 7f5cedd4c2a9f646424a4fbaff1ac2cd5272b8e8 Mon Sep 17 00:00:00 2001 From: Ron Cohen Date: Fri, 26 Jan 2018 15:24:26 +0100 Subject: [PATCH 10/31] Update SSL settings headline --- docs/copied-from-beats/shared-ssl-config.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/copied-from-beats/shared-ssl-config.asciidoc b/docs/copied-from-beats/shared-ssl-config.asciidoc index e813bedf909..c1111d83d6c 100644 --- a/docs/copied-from-beats/shared-ssl-config.asciidoc +++ b/docs/copied-from-beats/shared-ssl-config.asciidoc @@ -1,5 +1,5 @@ [[configuration-ssl]] -== Specify SSL settings +== SSL settings for outputs You can specify SSL options for any <> that supports SSL. 
You can also specify SSL options when you From 764b69c0deaf4b6233f7c9772e9449ee210081e7 Mon Sep 17 00:00:00 2001 From: Ron Cohen Date: Fri, 26 Jan 2018 15:45:39 +0100 Subject: [PATCH 11/31] Updated template config following a beats update --- docs/copied-from-beats/template-config.asciidoc | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/copied-from-beats/template-config.asciidoc b/docs/copied-from-beats/template-config.asciidoc index fbe079918f3..6b82e1e0856 100644 --- a/docs/copied-from-beats/template-config.asciidoc +++ b/docs/copied-from-beats/template-config.asciidoc @@ -23,7 +23,7 @@ existing one. you must <>. *`setup.template.name`*:: The name of the template. The default is -+{beatname_lc}+. The {beatname_uc} version is always appended to the given ++{beatname_lc}-*+. The {beatname_uc} version is always appended to the given name, so the final name is +{beatname_lc}-%\{[beat.version]\}+. // Maintainers: a backslash character is required to escape curly braces and @@ -33,9 +33,9 @@ name, so the final name is +{beatname_lc}-%\{[beat.version]\}+. // the example as expected. *`setup.template.pattern`*:: The template pattern to apply to the default index -settings. The default pattern is +{beatname_lc}-\*+. The {beatname_uc} version is always +settings. The default pattern is +{beat_default_index_prefix}-\*+. The {beatname_uc} version is always included in the pattern, so the final pattern is -+{beatname_lc}-%\{[beat.version]\}-*+. The wildcard character `-*` is used to ++{beat_default_index_prefix}-%\{[beat.version]\}-*+. The wildcard character `-*` is used to match all daily indices. 
+ Example: @@ -43,7 +43,7 @@ Example: ["source","yaml",subs="attributes"] ---------------------------------------------------------------------- setup.template.name: "{beatname_lc}" -setup.template.pattern: "{beatname_lc}-*" +setup.template.pattern: "{beat_default_index_prefix}-*" ---------------------------------------------------------------------- *`setup.template.fields`*:: The path to the YAML file describing the fields. The default is +fields.yml+. If a From 63f5379378044c49cfa9fd241695917274a71ae7 Mon Sep 17 00:00:00 2001 From: Ron Cohen Date: Fri, 26 Jan 2018 15:48:15 +0100 Subject: [PATCH 12/31] Logging config update. --- docs/copied-from-beats/loggingconfig.asciidoc | 15 +++++++-------- 1 file changed, 7 insertions(+), 8 deletions(-) diff --git a/docs/copied-from-beats/loggingconfig.asciidoc b/docs/copied-from-beats/loggingconfig.asciidoc index 7f7f75cc91a..94536f34c38 100644 --- a/docs/copied-from-beats/loggingconfig.asciidoc +++ b/docs/copied-from-beats/loggingconfig.asciidoc @@ -14,7 +14,7 @@ == Configure logging The `logging` section of the +{beatname_lc}.yml+ config file contains options -for configuring the Beats logging output. The logging system can write logs to +for configuring the logging output. The logging system can write logs to the syslog or rotate log files. If logging is not explicitly configured the file output is used. @@ -67,7 +67,7 @@ Minimum log level. One of `debug`, `info`, `warning`, or `error`. The default log level is `info`. `debug`:: Logs debug messages, including a detailed printout of all events -flushed by the Beat. Also logs informational messages, warnings, errors, and +flushed. Also logs informational messages, warnings, errors, and critical errors. When the log level is `debug`, you can specify a list of <> to display debug messages for specific components. If no selectors are specified, the `*` selector is used to display debug messages @@ -84,9 +84,9 @@ published. Also logs any warnings, errors, or critical errors. 
[[selectors]] ==== `logging.selectors` -The list of debugging-only selector tags used by different Beats components. Use `*` -to enable debug output for all components. For example add `publish` to display -all the debug messages related to event publishing. When starting the Beat, +The list of debugging-only selector tags used by different {beatname_uc} components. +Use `*` to enable debug output for all components. For example add `publish` to display +all the debug messages related to event publishing. When starting {beatname_lc}, selectors can be overwritten using the `-d` command line option (`-d` also sets the debug log level). @@ -123,8 +123,7 @@ the <> section for details. [float] ==== `logging.files.name` -The name of the file that logs are written to. By default, the name of the Beat -is used. +The name of the file that logs are written to. The default is '{beatname_lc}'. [float] ==== `logging.files.rotateeverybytes` @@ -181,4 +180,4 @@ Below are some samples: `2017-12-17T18:54:16.242-0500 INFO [example] logp/core_test.go:16 some message` -`2017-12-17T18:54:16.242-0500 INFO [example] logp/core_test.go:19 some message {"x": 1}` \ No newline at end of file +`2017-12-17T18:54:16.242-0500 INFO [example] logp/core_test.go:19 some message {"x": 1}` From 1aab0ef0d2c31f0618c6a4b13b3fb370f4aa40fa Mon Sep 17 00:00:00 2001 From: Ron Cohen Date: Fri, 26 Jan 2018 16:57:18 +0100 Subject: [PATCH 13/31] Use specific beat name in shared-kibana-config.asciidoc --- docs/copied-from-beats/shared-kibana-config.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/copied-from-beats/shared-kibana-config.asciidoc b/docs/copied-from-beats/shared-kibana-config.asciidoc index f9a0a5bc78f..4525cde26e6 100644 --- a/docs/copied-from-beats/shared-kibana-config.asciidoc +++ b/docs/copied-from-beats/shared-kibana-config.asciidoc @@ -12,7 +12,7 @@ [[setup-kibana-endpoint]] == Set up the Kibana endpoint -Starting with Beats 6.0.0, the Kibana dashboards are loaded 
into Kibana +Starting with {beatname_uc} 6.0.0, the Kibana dashboards are loaded into Kibana via the Kibana API. This requires a Kibana endpoint configuration. You configure the endpoint in the `setup.kibana` section of the From bebc104e50e6467d6f307ba9a12c150bbdbc19dd Mon Sep 17 00:00:00 2001 From: Ron Cohen Date: Tue, 30 Jan 2018 11:35:19 +0100 Subject: [PATCH 14/31] Add html_docs to .gitignore --- .gitignore | 1 + 1 file changed, 1 insertion(+) diff --git a/.gitignore b/.gitignore index ec68c4763ca..7ac68950c6d 100644 --- a/.gitignore +++ b/.gitignore @@ -18,3 +18,4 @@ /fields.yml /apm-server.template-es.json +html_docs From 294aeeea4424e43db2a6ebe04f1efd1ccb3854fc Mon Sep 17 00:00:00 2001 From: Ron Cohen Date: Wed, 31 Jan 2018 11:24:12 +0100 Subject: [PATCH 15/31] Add .\ for PS instruction and make it possible to remove logstash mention. --- .../shared-template-load.asciidoc | 22 ++++++++++++++----- 1 file changed, 16 insertions(+), 6 deletions(-) diff --git a/docs/copied-from-beats/shared-template-load.asciidoc b/docs/copied-from-beats/shared-template-load.asciidoc index 9ca7d0e0217..3677a1091a1 100644 --- a/docs/copied-from-beats/shared-template-load.asciidoc +++ b/docs/copied-from-beats/shared-template-load.asciidoc @@ -113,10 +113,13 @@ See <> for the full list of configuration options. ==== Load the template manually To load the template manually, run the <> command. A -connection to Elasticsearch is required. If Logstash output is enabled, you need +connection to Elasticsearch is required. +ifndef::only-elasticsearch[] +If Logstash output is enabled, you need to temporarily disable the Logstash output and enable Elasticsearch by using the `-E` option. The examples here assume that Logstash output is enabled. You can omit the `-E` flags if Elasticsearch output is already enabled. +endif::[] If you are connecting to a secured Elasticsearch cluster, make sure you've configured credentials as described in <<{beatname_lc}-configuration>>. 
@@ -126,6 +129,14 @@ Elasticsearch, see <>. To load the template, use the appropriate command for your system. +ifndef::only-elasticsearch[] +:disable_logstash: {sp}-E output.logstash.enabled=false +endif::[] + +ifdef::only-elasticsearch[] +:disable_logstash: +endif::[] + ifdef::allplatforms[] ifeval::["{requires-sudo}"=="yes"] @@ -135,17 +146,16 @@ include::./shared-note-sudo.asciidoc[] endif::[] *deb and rpm:* - ["source","sh",subs="attributes"] ---- -{beatname_lc} setup --template -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]' +{beatname_lc} setup --template{disable_logstash} -E 'output.elasticsearch.hosts=["localhost:9200"]' ---- *mac:* ["source","sh",subs="attributes"] ---- -./{beatname_lc} setup --template -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]' +./{beatname_lc} setup --template{disable_logstash} -E 'output.elasticsearch.hosts=["localhost:9200"]' ---- @@ -155,7 +165,7 @@ ifeval::["{beatname_lc}"!="auditbeat"] ["source","sh",subs="attributes"] ---------------------------------------------------------------------- -docker run {dockerimage} setup --template -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]' +docker run {dockerimage} setup --template{disable_logstash} -E 'output.elasticsearch.hosts=["localhost:9200"]' ---------------------------------------------------------------------- @@ -174,7 +184,7 @@ and run: ["source","sh",subs="attributes,callouts"] ---------------------------------------------------------------------- -PS > {beatname_lc} setup --template -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]' +PS > .{backslash}{beatname_lc} setup --template{disable_logstash} -E 'output.elasticsearch.hosts=["localhost:9200"]' ---------------------------------------------------------------------- From bbfa6daac6115bf46c88710d5a0259b8880a8020 Mon Sep 17 00:00:00 2001 From: Ron Cohen Date: Wed, 31 Jan 2018 11:26:09 
+0100 Subject: [PATCH 16/31] Switch kibana endpoint and kibana conf around --- docs/configuring.asciidoc | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/configuring.asciidoc b/docs/configuring.asciidoc index 233843e298a..b6991883579 100644 --- a/docs/configuring.asciidoc +++ b/docs/configuring.asciidoc @@ -11,8 +11,8 @@ The following explains how to configure APM Server: * <> * <> * <> -* <> * <> +* <> * <> * <> -- @@ -27,10 +27,10 @@ include::./template-config.asciidoc[] include::./copied-from-beats/loggingconfig.asciidoc[] -include::./copied-from-beats/dashboardsconfig.asciidoc[] - include::./copied-from-beats/shared-kibana-config.asciidoc[] +include::./copied-from-beats/dashboardsconfig.asciidoc[] + :standalone: include::./copied-from-beats/shared-env-vars.asciidoc[] From a0a482a61413387cfbb065e8802d680603cffb32 Mon Sep 17 00:00:00 2001 From: Ron Cohen Date: Wed, 31 Jan 2018 11:37:50 +0100 Subject: [PATCH 17/31] Introduce 'has_ml_jobs'. --- .../copied-from-beats/command-reference.asciidoc | 16 +++++++++++++--- docs/index.asciidoc | 3 ++- 2 files changed, 15 insertions(+), 4 deletions(-) diff --git a/docs/copied-from-beats/command-reference.asciidoc b/docs/copied-from-beats/command-reference.asciidoc index 9974937ac02..1dd132deb78 100644 --- a/docs/copied-from-beats/command-reference.asciidoc +++ b/docs/copied-from-beats/command-reference.asciidoc @@ -18,7 +18,13 @@ :help-command-short-desc: Shows help for any command :modules-command-short-desc: Manages configured modules :run-command-short-desc: Runs {beatname_uc}. 
This command is used by default if you start {beatname_uc} without specifying a command
+
+ifeval::["{has_ml_jobs}"=="yes"]
 :setup-command-short-desc: Sets up the initial environment, including the index template, Kibana dashboards (when available), and machine learning jobs (when available)
+endif::[]
+ifeval::["{has_ml_jobs}"!="yes"]
+:setup-command-short-desc: Sets up the initial environment, including the index template, Kibana dashboards (when available)
+endif::[]
+
 :test-command-short-desc: Tests the configuration
 :version-command-short-desc: Shows information about the current version
 
@@ -26,7 +32,7 @@
 [[command-line-options]]
 === {beatname_uc} commands
 
-{beatname_uc} provides a command-line interface for running the Beat and
+{beatname_uc} provides a command-line interface for starting {beatname_uc} and
 performing common tasks, like testing configuration files and loading
 dashboards. The command-line also supports <> for controlling global
 behaviors.
 
@@ -378,8 +384,10 @@ Or:
 
 * The index template ensures that fields are mapped correctly in Elasticsearch.
 * The Kibana dashboards make it easier for you to visualize {beatname_uc} data
 in Kibana.
+ifeval::["{has_ml_jobs}"=="yes"]
 * The machine learning jobs contain the configuration information and metadata
 necessary to analyze data for anomalies.
+endif::[]
 
 Use this command instead of `run --setup` when you want to set up the
 environment without actually running {beatname_uc} and ingesting data.
 
@@ -400,8 +408,10 @@ Sets up the Kibana dashboards only.
 
 *`-h, --help`*:: Shows help for the `setup` command.
 
+ifeval::["{has_ml_jobs}"=="yes"]
 *`--machine-learning`*:: 
+endif::[] ifeval::["{beatname_lc}"=="filebeat"] @@ -524,7 +534,7 @@ For example: + ["source","sh",subs="attributes"] ---------------------------------------------------------------------- -{beatname_lc} -E "name=mybeat" -E "output.elasticsearch.hosts=['http://myhost:9200']" +{beatname_lc} -E "name=mybeat" -E "output.elasticsearch.hosts=["http://myhost:9200"]" ---------------------------------------------------------------------- + This setting is applied to the currently running {beatname_uc} process. @@ -537,7 +547,7 @@ ifeval::["{beatname_lc}"=="filebeat"] + ["source","sh",subs="attributes"] ---------------------------------------------------------------------- -{beatname_lc} -modules=nginx -M "nginx.access.var.paths=['/var/log/nginx/access.log*']" -M "nginx.access.var.pipeline=no_plugins" +{beatname_lc} -modules=nginx -M "nginx.access.var.paths=[/var/log/nginx/access.log*]" -M "nginx.access.var.pipeline=no_plugins" ---------------------------------------------------------------------- endif::[] diff --git a/docs/index.asciidoc b/docs/index.asciidoc index b31ffbdb674..4420764687e 100644 --- a/docs/index.asciidoc +++ b/docs/index.asciidoc @@ -5,8 +5,9 @@ include::{asciidoc-dir}/../../shared/attributes.asciidoc[] :version: {stack-version} :beatname_lc: apm-server :beatname_uc: APM Server -:beat_default_index_prefix: apm :beatname_pkg: {beatname_lc} +:beat_default_index_prefix: apm +:has_ml_jobs: no :dockerimage: docker.elastic.co/apm/{beatname_lc}:{version} :dockergithub: https://github.com/elastic/apm-server-docker/tree/{doc-branch} From 86bb1345bea216e334eacb4d29a5597804568106 Mon Sep 17 00:00:00 2001 From: Ron Cohen Date: Wed, 31 Jan 2018 12:50:15 +0100 Subject: [PATCH 18/31] Less blamy text, removed 6.0 note, links to the correct place now --- docs/configuring.asciidoc | 2 +- docs/copied-from-beats/dashboards.asciidoc | 10 +++------- docs/overview.asciidoc | 2 +- docs/setting-up-and-running.asciidoc | 20 ++++++++++++++++++++ 4 files changed, 25 insertions(+), 
9 deletions(-)

diff --git a/docs/configuring.asciidoc b/docs/configuring.asciidoc
index b6991883579..b7c8d1c2e6c 100644
--- a/docs/configuring.asciidoc
+++ b/docs/configuring.asciidoc
@@ -1,4 +1,4 @@
-[[apm-server-configuration]]
+[[configuring-howto-apm-server]]
 = Configuring APM Server
 
 [partintro]
diff --git a/docs/copied-from-beats/dashboards.asciidoc b/docs/copied-from-beats/dashboards.asciidoc
index eb1f7959b68..3c76c4d91ca 100644
--- a/docs/copied-from-beats/dashboards.asciidoc
+++ b/docs/copied-from-beats/dashboards.asciidoc
@@ -9,20 +9,16 @@
 ////
 include::../../libbeat/docs/dashboards.asciidoc[]
 //////////////////////////////////////////////////////////////////////////
-
 {beatname_uc} comes packaged with example Kibana dashboards, visualizations,
 and searches for visualizing {beatname_uc} data in Kibana. Before you can use
-the dashboards, you need to create the index pattern, +{beat_default_index_prefix}-*+, and
+the dashboards, you need to create the index pattern, +{beat_default_index_prefix}-*+, and
 load the dashboards into Kibana. To do this, you can either run the `setup`
 command (as described here) or <> in the
 +{beatname_lc}.yml+ config file.
 
-NOTE: Starting with {beatname_uc} 6.0.0, the dashboards are loaded via the Kibana API.
-This requires a Kibana endpoint configuration. You should have configured the
-endpoint earlier when you
-<<{beatname_lc}-configuration,configured {beatname_uc}>>. If you didn't,
-configure it now.
+This requires a Kibana endpoint configuration. If you didn't already configure
+a Kibana endpoint, see <<{beatname_lc}-configuration,how to configure {beatname_uc}>>.
 
 Make sure Kibana is running before you perform this step.
If you are accessing a secured Kibana instance, make sure you've configured credentials as described in diff --git a/docs/overview.asciidoc b/docs/overview.asciidoc index 9c709fbd54b..cbf625da647 100644 --- a/docs/overview.asciidoc +++ b/docs/overview.asciidoc @@ -30,4 +30,4 @@ In the following you can read more about * <> * <> -* <> +* <> diff --git a/docs/setting-up-and-running.asciidoc b/docs/setting-up-and-running.asciidoc index e79d6281c77..3abe8c56ade 100644 --- a/docs/setting-up-and-running.asciidoc +++ b/docs/setting-up-and-running.asciidoc @@ -24,6 +24,8 @@ You can change the defaults by supplying a different address on the command line ./apm-server -e -E output.elasticsearch.hosts=ElasticsearchAddress:9200 -E apm-server.host=localhost:8200 ---------------------------------- +[[apm-server-configuration]] +=== Configuration file Or you can update the `apm-server.yml` configuration file to change the defaults. [source,yaml] @@ -49,6 +51,24 @@ output.elasticsearch: password: "elastic" ---- +If you plan to use the sample Kibana dashboards provided with {beatname_uc}, +configure the Kibana endpoint: + +[source,yaml] +---------------------------------------------------------------------- +setup.kibana: + host: "localhost:5601" +---------------------------------------------------------------------- + +-- +Where `host` is the hostname and port of the machine where Kibana is running, +for example, `localhost:5601`. + +NOTE: If you specify a path after the port number, you need to include +the scheme and port: `http://localhost:5601/path`. + +-- + See https://github.com/elastic/apm-server/blob/{doc-branch}/apm-server.reference.yml[`apm-server.reference.yml`] for more configuration options. 
From a405c89d39c2980bbc0530f3df837b28e78f9b79 Mon Sep 17 00:00:00 2001
From: Ron Cohen
Date: Wed, 31 Jan 2018 12:50:34 +0100
Subject: [PATCH 19/31] better output config text for elasticsearch

---
 docs/copied-from-beats/outputconfig.asciidoc | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/docs/copied-from-beats/outputconfig.asciidoc b/docs/copied-from-beats/outputconfig.asciidoc
index 46729c24d63..84fd8b985d5 100644
--- a/docs/copied-from-beats/outputconfig.asciidoc
+++ b/docs/copied-from-beats/outputconfig.asciidoc
@@ -13,15 +13,18 @@
 [[configuring-output]]
 == Configure the output
 
+ifdef::only-elasticsearch[]
+You configure {beatname_uc} to write to Elasticsearch by setting options in
+the `output.elasticsearch` section of the +{beatname_lc}.yml+ config file.
+endif::[]
+
+ifndef::only-elasticsearch[]
 You configure {beatname_uc} to write to a specific output by setting options
 in the `output` section of the +{beatname_lc}.yml+ config file. Only a single
 output may be defined.
 
 The following topics describe how to configure each supported output:
 
-* <>
-
-ifndef::only-elasticsearch[]
 * <>
 * <>
 * <>
@@ -29,7 +32,6 @@ ifndef::only-elasticsearch[]
 * <>
 endif::[]
 
-
 [[elasticsearch-output]]
 === Configure the Elasticsearch output

From 7bb506dbcd9525c1b10d3a3e0d361bf1863e91f1 Mon Sep 17 00:00:00 2001
From: Ron Cohen
Date: Wed, 31 Jan 2018 12:56:46 +0100
Subject: [PATCH 20/31] Better wording in configuration.
--- docs/configuring.asciidoc | 2 +- docs/setting-up-and-running.asciidoc | 12 ++++++++++-- 2 files changed, 11 insertions(+), 3 deletions(-) diff --git a/docs/configuring.asciidoc b/docs/configuring.asciidoc index b7c8d1c2e6c..43c3acafce7 100644 --- a/docs/configuring.asciidoc +++ b/docs/configuring.asciidoc @@ -5,7 +5,7 @@ -- include::./copied-from-beats/shared-configuring.asciidoc[] -The following explains how to configure APM Server: +The following topics describe how to configure APM Server: * <> * <> diff --git a/docs/setting-up-and-running.asciidoc b/docs/setting-up-and-running.asciidoc index 3abe8c56ade..e9caacbb35a 100644 --- a/docs/setting-up-and-running.asciidoc +++ b/docs/setting-up-and-running.asciidoc @@ -26,7 +26,15 @@ You can change the defaults by supplying a different address on the command line [[apm-server-configuration]] === Configuration file -Or you can update the `apm-server.yml` configuration file to change the defaults. +To configure APM Server, you can also update the `apm-server.yml` configuration file. + +For rpm and deb, +you’ll find the configuration file at +/etc/{beatname_lc}/{beatname_lc}.yml+. +There's also a full example configuration file at ++/etc/{beatname_lc}/{beatname_lc}.reference.yml+ that shows all non-deprecated +options. For mac and win, look in the archive that you extracted. + +See the _Beats Platform Reference_ for more about the structure of the config file. [source,yaml] ---------------------------------- @@ -69,7 +77,7 @@ the scheme and port: `http://localhost:5601/path`. -- -See https://github.com/elastic/apm-server/blob/{doc-branch}/apm-server.reference.yml[`apm-server.reference.yml`] for more configuration options. +See <> for more configuration options. 
include::./high-availability.asciidoc[]

From 08401f8d4d1c26cf65ec3131798b7c39ef352aec Mon Sep 17 00:00:00 2001
From: Ron Cohen
Date: Wed, 31 Jan 2018 13:10:36 +0100
Subject: [PATCH 21/31] Only talk about Filebeat for filebeat docs

---
 docs/copied-from-beats/outputconfig.asciidoc | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/docs/copied-from-beats/outputconfig.asciidoc b/docs/copied-from-beats/outputconfig.asciidoc
index 84fd8b985d5..61bc89c2458 100644
--- a/docs/copied-from-beats/outputconfig.asciidoc
+++ b/docs/copied-from-beats/outputconfig.asciidoc
@@ -277,10 +277,14 @@
 The number of times to retry publishing an event after a publishing failure.
 After the specified number of retries, the events are typically dropped.
 
-Some Beats, such as Filebeat, ignore the `max_retries` setting and retry until all
-events are published.
+ifeval::["{beatname_lc}" == "filebeat"]
+Filebeat will ignore the `max_retries` setting and retry until all
+events are published.
+endif::[]
 
+ifeval::["{beatname_lc}" != "filebeat"]
 Set `max_retries` to a value less than 0 to retry until all events are published.
+endif::[]
 
 The default is 3.
 

From 295b4f2f893f72e32afbd8ab0d5e3a8159bcc09a Mon Sep 17 00:00:00 2001
From: Ron Cohen
Date: Wed, 31 Jan 2018 13:10:57 +0100
Subject: [PATCH 22/31] Fix list in output config

---
 docs/copied-from-beats/outputconfig.asciidoc | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/docs/copied-from-beats/outputconfig.asciidoc b/docs/copied-from-beats/outputconfig.asciidoc
index 61bc89c2458..44c27d67a6e 100644
--- a/docs/copied-from-beats/outputconfig.asciidoc
+++ b/docs/copied-from-beats/outputconfig.asciidoc
@@ -24,12 +24,14 @@
 in the `output` section of the +{beatname_lc}.yml+ config file. Only a single
 output may be defined.
The following topics describe how to configure each supported output: + * <> * <> * <> * <> * <> * <> + endif::[] [[elasticsearch-output]] From d55da13ee08e2e998b052fdbcc310e82fb199d07 Mon Sep 17 00:00:00 2001 From: Ron Cohen Date: Wed, 31 Jan 2018 13:25:41 +0100 Subject: [PATCH 23/31] It's 'warn', not 'warning'. Found by @simitt --- docs/copied-from-beats/loggingconfig.asciidoc | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/copied-from-beats/loggingconfig.asciidoc b/docs/copied-from-beats/loggingconfig.asciidoc index 94536f34c38..76c578de8d6 100644 --- a/docs/copied-from-beats/loggingconfig.asciidoc +++ b/docs/copied-from-beats/loggingconfig.asciidoc @@ -63,7 +63,7 @@ errors, there will be no log file in the directory specified for logs. [[level]] ==== `logging.level` -Minimum log level. One of `debug`, `info`, `warning`, or `error`. The default +Minimum log level. One of `debug`, `info`, `warn`, or `error`. The default log level is `info`. `debug`:: Logs debug messages, including a detailed printout of all events @@ -76,7 +76,7 @@ for all components. `info`:: Logs informational messages, including the number of events that are published. Also logs any warnings, errors, or critical errors. -`warning`:: Logs warnings, errors, and critical errors. +`warn`:: Logs warnings, errors, and critical errors. `error`:: Logs errors and critical errors. 
From 37663142194c4d7e1ac8870628c2b0a17d10933d Mon Sep 17 00:00:00 2001
From: Ron Cohen
Date: Wed, 31 Jan 2018 13:35:32 +0100
Subject: [PATCH 24/31] Update the rest of the max_retries sections to only talk about Filebeat for filebeat

---
 docs/copied-from-beats/outputconfig.asciidoc | 25 +++++++++++++++++++------
 1 file changed, 19 insertions(+), 6 deletions(-)

diff --git a/docs/copied-from-beats/outputconfig.asciidoc b/docs/copied-from-beats/outputconfig.asciidoc
index 44c27d67a6e..6db060d4e5e 100644
--- a/docs/copied-from-beats/outputconfig.asciidoc
+++ b/docs/copied-from-beats/outputconfig.asciidoc
@@ -526,10 +526,14 @@ The number of seconds to wait for responses from the Logstash server before timi
 The number of times to retry publishing an event after a publishing failure.
 After the specified number of retries, the events are typically dropped.
 
-Some Beats, such as Filebeat, ignore the `max_retries` setting and retry until all
-events are published.
+ifeval::["{beatname_lc}" == "filebeat"]
+Filebeat will ignore the `max_retries` setting and retry until all
+events are published.
+endif::[]
 
+ifeval::["{beatname_lc}" != "filebeat"]
 Set `max_retries` to a value less than 0 to retry until all events are published.
+endif::[]
 
 The default is 3.
 
@@ -717,10 +721,14 @@ brokers, topics, partition, and active leaders to use for publishing.
 The number of times to retry publishing an event after a publishing failure.
 After the specified number of retries, the events are typically dropped.
 
-Some Beats, such as Filebeat, ignore the `max_retries` setting and retry until all
-events are published.
+ifeval::["{beatname_lc}" == "filebeat"]
+Filebeat will ignore the `max_retries` setting and retry until all
+events are published.
+endif::[]
 
+ifeval::["{beatname_lc}" != "filebeat"]
 Set `max_retries` to a value less than 0 to retry until all events are published.
+endif::[]
 
 The default is 3.
 
@@ -939,13 +947,18 @@ The Redis connection timeout in seconds. The default is 5 seconds.
 The number of times to retry publishing an event after a publishing failure.
 After the specified number of retries, the events are typically dropped.
 
-Some Beats, such as Filebeat, ignore the `max_retries` setting and retry until all
-events are published.
+ifeval::["{beatname_lc}" == "filebeat"]
+Filebeat will ignore the `max_retries` setting and retry until all
+events are published.
+endif::[]
 
+ifeval::["{beatname_lc}" != "filebeat"]
 Set `max_retries` to a value less than 0 to retry until all events are published.
+endif::[]
 
 The default is 3.
 
+
 ===== `bulk_max_size`
 
 The maximum number of events to bulk in a single Redis request or pipeline. The default is 2048.

From 0b79cce6fe6ef7c69a44a435e05b864bb3425232 Mon Sep 17 00:00:00 2001
From: Ron Cohen
Date: Wed, 31 Jan 2018 13:41:18 +0100
Subject: [PATCH 25/31] Special case for apm-server as it was only introduced in 6.0

---
 docs/copied-from-beats/shared-kibana-config.asciidoc | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/docs/copied-from-beats/shared-kibana-config.asciidoc b/docs/copied-from-beats/shared-kibana-config.asciidoc
index 4525cde26e6..8b1f79721bb 100644
--- a/docs/copied-from-beats/shared-kibana-config.asciidoc
+++ b/docs/copied-from-beats/shared-kibana-config.asciidoc
@@ -12,8 +12,16 @@
 
 [[setup-kibana-endpoint]]
 == Set up the Kibana endpoint
 
+ifeval::["{beatname_lc} == "apm-server"]
+The Kibana dashboards are loaded into Kibana via the Kibana API.
+This requires a Kibana endpoint configuration.
+endif::[]
+
+ifeval::["{beatname_lc} != "apm-server"]
 Starting with {beatname_uc} 6.0.0, the Kibana dashboards are loaded into Kibana via the Kibana API.
 This requires a Kibana endpoint configuration.
+endif::[]
+
 You configure the endpoint in the `setup.kibana` section of the
 +{beatname_lc}.yml+ config file.

From d3eeada2494cf3da766366890bed70aed46da09e Mon Sep 17 00:00:00 2001
From: Ron Cohen
Date: Thu, 1 Feb 2018 10:03:09 +0100
Subject: [PATCH 26/31] Missing double quotes caused build to fail.
--- docs/copied-from-beats/shared-kibana-config.asciidoc | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/copied-from-beats/shared-kibana-config.asciidoc b/docs/copied-from-beats/shared-kibana-config.asciidoc index 8b1f79721bb..d373a5bc0b3 100644 --- a/docs/copied-from-beats/shared-kibana-config.asciidoc +++ b/docs/copied-from-beats/shared-kibana-config.asciidoc @@ -12,12 +12,12 @@ [[setup-kibana-endpoint]] == Set up the Kibana endpoint -ifeval::["{beatname_lc} == "apm-server"] +ifeval::["{beatname_lc}" == "apm-server"] The Kibana dashboards are loaded into Kibana via the Kibana API. This requires a Kibana endpoint configuration. endif::[] -ifeval::["{beatname_lc} != "apm-server"] +ifeval::["{beatname_lc}" != "apm-server"] Starting with {beatname_uc} 6.0.0, the Kibana dashboards are loaded into Kibana via the Kibana API. This requires a Kibana endpoint configuration. endif::[] From 0e5d9cf4902a38ba9e64f3e6dd0c6cdcb4c8a257 Mon Sep 17 00:00:00 2001 From: Ron Cohen Date: Thu, 1 Feb 2018 10:21:37 +0100 Subject: [PATCH 27/31] Moved Frontend support under Configuring + more restructure. 
--- docs/configuring.asciidoc | 3 +++ docs/context.asciidoc | 2 +- docs/error-api.asciidoc | 30 +++++++++++++++--------------- docs/errors.asciidoc | 6 +++--- docs/event-types.asciidoc | 7 ++++++- docs/index.asciidoc | 8 +++----- docs/intake-api.asciidoc | 2 +- docs/spans.asciidoc | 8 ++++---- docs/transaction-api.asciidoc | 32 ++++++++++++++++---------------- docs/transactions.asciidoc | 8 +++++--- 10 files changed, 57 insertions(+), 49 deletions(-) diff --git a/docs/configuring.asciidoc b/docs/configuring.asciidoc index 43c3acafce7..86f0d0ffa58 100644 --- a/docs/configuring.asciidoc +++ b/docs/configuring.asciidoc @@ -13,6 +13,7 @@ The following topics describe how to configure APM Server: * <> * <> * <> +* <> * <> * <> -- @@ -31,6 +32,8 @@ include::./copied-from-beats/shared-kibana-config.asciidoc[] include::./copied-from-beats/dashboardsconfig.asciidoc[] +include::./frontend.asciidoc[] + :standalone: include::./copied-from-beats/shared-env-vars.asciidoc[] diff --git a/docs/context.asciidoc b/docs/context.asciidoc index 93e5128bc77..2c0bb05f212 100644 --- a/docs/context.asciidoc +++ b/docs/context.asciidoc @@ -1,5 +1,5 @@ [float] -==== Context +=== Context An event's context bundles information regarding the environment in which it is recorded. It describes the `service` in which the event is captured, the `system` in which the monitored service is running and the event's `process` information. diff --git a/docs/error-api.asciidoc b/docs/error-api.asciidoc index 00671eef22e..3c5f81b4d0e 100644 --- a/docs/error-api.asciidoc +++ b/docs/error-api.asciidoc @@ -1,5 +1,5 @@ [[error-api]] -=== Error API +== Error API The APM Server exposes an API Endpoint to send error records. Unless you are implementing an agent, you don't need to know about the specifics of this API. 
@@ -12,7 +12,7 @@ The following section contains information about: [[error-endpoint]] [float] -==== Endpoint +=== Endpoint To send an error record you need to send a `HTTP POST` request to the APM Server `errors` endpoint: [source,bash] @@ -32,7 +32,7 @@ Information pertaining to the error record must be sent as a JSON object to the [[error-schema-definition]] [float] -==== Schema Definition +=== Schema Definition The APM Server uses a JSON Schema for validating the transaction requests. Find details on how the schema is defined: @@ -49,7 +49,7 @@ Find details on how the schema is defined: [[error-payload-schema]] [float] -===== Payload +==== Payload [source,json] ---- @@ -58,7 +58,7 @@ include::./spec/errors/payload.json[] [[error-error-schema]] [float] -===== Error +==== Error [source,json] ---- @@ -67,7 +67,7 @@ include::./spec/errors/error.json[] [[error-service-schema]] [float] -===== Service +==== Service [source,json] ---- @@ -76,7 +76,7 @@ include::./spec/service.json[] [[error-system-schema]] [float] -===== System +==== System [source,json] ---- @@ -85,7 +85,7 @@ include::./spec/system.json[] [[error-context-schema]] [float] -===== Context +==== Context [source,json] ---- @@ -94,7 +94,7 @@ include::./spec/context.json[] [[error-stacktraceframe-schema]] [float] -===== Stacktrace Frame +==== Stacktrace Frame [source,json] ---- @@ -103,7 +103,7 @@ include::./spec/stacktrace_frame.json[] [[error-request-schema]] [float] -===== Request +==== Request [source,json] ---- @@ -112,7 +112,7 @@ include::./spec/request.json[] [[error-user-schema]] [float] -===== User +==== User [source,json] ---- @@ -121,7 +121,7 @@ include::./spec/user.json[] [[error-api-examples]] [float] -==== Examples +=== Examples Send an example request to the APM Server: @@ -140,7 +140,7 @@ See examples on how an error request to the APM Server can look like: [[payload-with-error]] [float] -===== Payload with an Error +==== Payload with an Error [source,json] ---- @@ -149,7 +149,7 @@ 
include::./data/intake-api/generated/error/payload.json[] [[payload-with-minimal-exception]] [float] -===== Payload with an Error with minimal Exception Information +==== Payload with an Error with minimal Exception Information [source,json] ---- @@ -158,7 +158,7 @@ include::./data/intake-api/generated/error/minimal_payload_exception.json[] [[payload-with-minimal-log]] [float] -===== Payload with an Error with minimal Log Information +==== Payload with an Error with minimal Log Information [source,json] ---- diff --git a/docs/errors.asciidoc b/docs/errors.asciidoc index 2075c2bd478..76c6d66bdfb 100644 --- a/docs/errors.asciidoc +++ b/docs/errors.asciidoc @@ -1,5 +1,5 @@ [[errors]] -=== Errors +== Errors An error record represents one error event, captured by Elastic APM agents within one service. It is identified by a unique ID. @@ -11,7 +11,7 @@ include::./context.asciidoc[] [[errors-error]] [float] -==== Error +=== Error An error event contains at least information about the original `exception` that occured or information about a `log` that was created when the exception occured. @@ -29,7 +29,7 @@ via the `transaction.id`, indexed together with the error event. [[error-example]] [float] -==== Example document +=== Example document [source,json] ---- diff --git a/docs/event-types.asciidoc b/docs/event-types.asciidoc index 92732dc4288..7f239ca45c6 100644 --- a/docs/event-types.asciidoc +++ b/docs/event-types.asciidoc @@ -1,5 +1,7 @@ [[event-types]] -== Event Types += Event Types +[partintro] +-- Elastic APM agents capture information about `transaction`, `span` and `error` events and send the information to the APM Server. The actual available information might be optimized per agent. @@ -8,6 +10,9 @@ The actual available information might be optimized per agent. 
* <> * <> + +-- + include::./transactions.asciidoc[] include::./spans.asciidoc[] include::./errors.asciidoc[] diff --git a/docs/index.asciidoc b/docs/index.asciidoc index 4420764687e..fc81b675725 100644 --- a/docs/index.asciidoc +++ b/docs/index.asciidoc @@ -25,13 +25,11 @@ include::./installing.asciidoc[] include::./setting-up-and-running.asciidoc[] -include::./event-types.asciidoc[] - -include::./frontend.asciidoc[] - include::./configuring.asciidoc[] -include::./intake-api.asciidoc[] +include::./event-types.asciidoc[] + +//include::./intake-api.asciidoc[] include::./fields.asciidoc[] diff --git a/docs/intake-api.asciidoc b/docs/intake-api.asciidoc index 5451bdc043d..5533bd60b9d 100644 --- a/docs/intake-api.asciidoc +++ b/docs/intake-api.asciidoc @@ -1,5 +1,5 @@ [[intake-api]] -== Intake API += Intake API The APM Server exposes API Endpoints for diff --git a/docs/spans.asciidoc b/docs/spans.asciidoc index 7d60fec9127..7e719f4c39b 100644 --- a/docs/spans.asciidoc +++ b/docs/spans.asciidoc @@ -1,5 +1,5 @@ [[spans]] -=== Spans +== Spans A span contains information about a specific code path, executed as part of a <>. @@ -21,20 +21,20 @@ Some of the key attributes of a span are described in more detail: [[span-context]] [float] -==== Context +=== Context In case a database query was captured, the span's context contains information about this database access. The context also contains information about the service the agent is monitoring. [[span-span]] [float] -==== Span +=== Span The information available within the span group includes the duration of the recorded span, a unique ID within a transaction and an automatically retrieved name. Additionally a span can contain `stack trace` information. 
[[span-example]] [float] -==== Example document +=== Example document [source,json] ---- diff --git a/docs/transaction-api.asciidoc b/docs/transaction-api.asciidoc index 77213050736..bad38bf3f8f 100644 --- a/docs/transaction-api.asciidoc +++ b/docs/transaction-api.asciidoc @@ -1,5 +1,5 @@ [[transaction-api]] -=== Transaction API +== Transaction API The APM Server exposes an API Endpoint to send <>. Unless you are implementing an agent, you don't need to know about the specifics of this API. @@ -12,7 +12,7 @@ The following section contains information about: [[transaction-endpoint]] [float] -==== Endpoint +=== Endpoint To send a transaction record you need to send a `HTTP POST` request to the APM Server `transactions` endpoint: [source,bash] @@ -32,7 +32,7 @@ Information pertaining to the record must be sent as a JSON object. [[transaction-schema-definition]] [float] -==== Schema Definition +=== Schema Definition The APM Server uses a JSON Schema for validating the transaction requests. Find details on how the schema is defined: @@ -49,7 +49,7 @@ Find details on how the schema is defined: [[transaction-payload-schema]] [float] -===== Payload +==== Payload [source,json] ---- @@ -58,7 +58,7 @@ include::./spec/transactions/payload.json[] [[transaction-transaction-schema]] [float] -===== Transaction +==== Transaction [source,json] ---- @@ -67,7 +67,7 @@ include::./spec/transactions/transaction.json[] [[transaction-span-schema]] [float] -===== Span +==== Span [source,json] ---- @@ -76,7 +76,7 @@ include::./spec/transactions/span.json[] [[transaction-service-schema]] [float] -===== Service +==== Service [source,json] ---- @@ -85,7 +85,7 @@ include::./spec/service.json[] [[transaction-system-schema]] [float] -===== System +==== System [source,json] ---- @@ -94,7 +94,7 @@ include::./spec/system.json[] [[transaction-context-schema]] [float] -===== Context +==== Context [source,json] ---- @@ -103,7 +103,7 @@ include::./spec/context.json[] 
[[transaction-stacktraceframe-schema]] [float] -===== Stacktrace Frame +==== Stacktrace Frame [source,json] ---- @@ -112,7 +112,7 @@ include::./spec/stacktrace_frame.json[] [[transaction-request-schema]] [float] -===== Request +==== Request [source,json] ---- @@ -121,7 +121,7 @@ include::./spec/request.json[] [[transaction-user-schema]] [float] -===== User +==== User [source,json] ---- @@ -130,7 +130,7 @@ include::./spec/user.json[] [[transaction-api-examples]] [float] -==== Examples +=== Examples Send an example request to the APM Server: @@ -149,7 +149,7 @@ See examples on how a transaction request to the APM Server can look like: [[payload-with-transactions]] [float] -===== Payload with several Transactions +==== Payload with several Transactions [source,json] ---- @@ -158,7 +158,7 @@ include::./data/intake-api/generated/transaction/payload.json[] [[payload-with-minimal-transaction]] [float] -===== Payload with a minimal Transaction +==== Payload with a minimal Transaction [source,json] ---- @@ -167,7 +167,7 @@ include::./data/intake-api/generated/transaction/minimal_payload.json[] [[payload-with-minimal-span]] [float] -===== Payload with a Transaction with a minimal Span +==== Payload with a Transaction with a minimal Span [source,json] ---- diff --git a/docs/transactions.asciidoc b/docs/transactions.asciidoc index 2f4baac529d..2679da4a4ff 100644 --- a/docs/transactions.asciidoc +++ b/docs/transactions.asciidoc @@ -1,5 +1,5 @@ [[transactions]] -=== Transactions +== Transactions A transaction represents one event, captured by an Elastic APM agent within one service. It is identified by a unique ID. @@ -17,7 +17,8 @@ include::./context.asciidoc[] [[transaction-transaction]] [float] -==== Transaction + +=== Transaction The information available within the transaction group includes the duration of the transaction, a unique id, the type and an automatically retrieved name, as well as an indication whether or not the transaction was handled successfully. 
@@ -27,9 +28,10 @@ The transaction can also contain: * span_count.dropped: how many spans have not been captured, according to configuration on the agent side * marks: captures the timing in milliseconds of a significant event during the lifetime of a transaction, set by the user or the agent + [[transaction-example]] [float] -==== Example document +=== Example document [source,json] ---- From 2fedfbfe881e91fc7f482e498e1a4848f5ebe362 Mon Sep 17 00:00:00 2001 From: Ron Cohen Date: Thu, 1 Feb 2018 10:26:51 +0100 Subject: [PATCH 28/31] Add intake-api back in. --- docs/event-types.asciidoc | 1 - docs/index.asciidoc | 2 +- docs/intake-api.asciidoc | 5 +++-- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/event-types.asciidoc b/docs/event-types.asciidoc index 7f239ca45c6..fb4dc843819 100644 --- a/docs/event-types.asciidoc +++ b/docs/event-types.asciidoc @@ -10,7 +10,6 @@ The actual available information might be optimized per agent. * <> * <> - -- include::./transactions.asciidoc[] diff --git a/docs/index.asciidoc b/docs/index.asciidoc index fc81b675725..2e28206f0b9 100644 --- a/docs/index.asciidoc +++ b/docs/index.asciidoc @@ -29,7 +29,7 @@ include::./configuring.asciidoc[] include::./event-types.asciidoc[] -//include::./intake-api.asciidoc[] +include::./intake-api.asciidoc[] include::./fields.asciidoc[] diff --git a/docs/intake-api.asciidoc b/docs/intake-api.asciidoc index 5533bd60b9d..a76acca5866 100644 --- a/docs/intake-api.asciidoc +++ b/docs/intake-api.asciidoc @@ -1,10 +1,11 @@ [[intake-api]] = Intake API - +[partintro] +-- The APM Server exposes API Endpoints for * <> * <> - +-- include::./transaction-api.asciidoc[] include::./error-api.asciidoc[] From 1bd0e7d54d97e89342383bf74fd687f5de2e0820 Mon Sep 17 00:00:00 2001 From: Ron Cohen Date: Thu, 1 Feb 2018 10:29:50 +0100 Subject: [PATCH 29/31] Removing "beta release" header. 
--- docs/guide/page_header.html | 1 - docs/page_header.html | 2 +- 2 files changed, 1 insertion(+), 2 deletions(-) diff --git a/docs/guide/page_header.html b/docs/guide/page_header.html index bd5fda66f44..e69de29bb2d 100644 --- a/docs/guide/page_header.html +++ b/docs/guide/page_header.html @@ -1 +0,0 @@ -You are looking at documentation for a beta release. diff --git a/docs/page_header.html b/docs/page_header.html index bd5fda66f44..8b137891791 100644 --- a/docs/page_header.html +++ b/docs/page_header.html @@ -1 +1 @@ -You are looking at documentation for a beta release. + From 222618d13900411471e08445f3e0bf39588bd8fa Mon Sep 17 00:00:00 2001 From: Ron Cohen Date: Thu, 1 Feb 2018 10:32:36 +0100 Subject: [PATCH 30/31] Revert changes related to warn vs warning following https://github.com/elastic/beats/pull/6240 --- docs/copied-from-beats/loggingconfig.asciidoc | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/copied-from-beats/loggingconfig.asciidoc b/docs/copied-from-beats/loggingconfig.asciidoc index 76c578de8d6..94536f34c38 100644 --- a/docs/copied-from-beats/loggingconfig.asciidoc +++ b/docs/copied-from-beats/loggingconfig.asciidoc @@ -63,7 +63,7 @@ errors, there will be no log file in the directory specified for logs. [[level]] ==== `logging.level` -Minimum log level. One of `debug`, `info`, `warn`, or `error`. The default +Minimum log level. One of `debug`, `info`, `warning`, or `error`. The default log level is `info`. `debug`:: Logs debug messages, including a detailed printout of all events @@ -76,7 +76,7 @@ for all components. `info`:: Logs informational messages, including the number of events that are published. Also logs any warnings, errors, or critical errors. -`warn`:: Logs warnings, errors, and critical errors. +`warning`:: Logs warnings, errors, and critical errors. `error`:: Logs errors and critical errors. 
From 0facb0f3a14158c7bf525cb2fa658f3d0ebab07b Mon Sep 17 00:00:00 2001 From: Ron Cohen Date: Thu, 1 Feb 2018 10:47:54 +0100 Subject: [PATCH 31/31] Remove page header files --- docs/guide/page_header.html | 0 docs/page_header.html | 1 - 2 files changed, 1 deletion(-) delete mode 100644 docs/guide/page_header.html delete mode 100644 docs/page_header.html diff --git a/docs/guide/page_header.html b/docs/guide/page_header.html deleted file mode 100644 index e69de29bb2d..00000000000 diff --git a/docs/page_header.html b/docs/page_header.html deleted file mode 100644 index 8b137891791..00000000000 --- a/docs/page_header.html +++ /dev/null @@ -1 +0,0 @@ -