How you deploy {kib} largely depends on your use case. If you are the only user, you can run {kib} on your local machine and configure it to point to whatever {es} instance you want to interact with. Conversely, if you have a large number of heavy {kib} users, you might need to load balance across multiple {kib} instances that are all connected to the same {es} instance.
While {kib} isn’t terribly resource intensive, we still recommend running {kib} separately from your {es} data or master nodes. To distribute {kib} traffic across the nodes in your {es} cluster, you can configure {kib} to use a list of {es} hosts.
To serve multiple {kib} installations behind a load balancer, you must change the configuration. See {kibana-ref}/settings.html[Configuring {kib}] for details on each setting.
Settings that must be unique across each {kib} instance:

* `server.uuid`
* `server.name`

Settings that must be unique across each host (for example, when running multiple installations on the same virtual machine):

* `logging.dest`
* `path.data`
* `pid.file`
* `server.port`

Settings that must be the same across all instances (an example `kibana.yml` sketch follows these lists):

* `xpack.security.encryptionKey` (used to decrypt session information)
* `xpack.reporting.encryptionKey` (used to decrypt reports)
* `xpack.encryptedSavedObjects.encryptionKey` (used to decrypt saved objects)
* `xpack.encryptedSavedObjects.keyRotation.decryptionOnlyKeys` (used during saved objects encryption key rotation, if any)
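As a concrete illustration, here is a sketch of what one instance's `kibana.yml` might contain behind a load balancer. The hostnames, paths, and key values are placeholders, not defaults; a second instance would differ only in the per-instance and per-host settings.

[source,yaml]
----
# instance1.yml (sketch; hostnames, paths, and keys are placeholders)

# Unique to this instance / host
server.name: "kibana-instance-1"
server.port: 5601
path.data: /var/lib/kibana/instance1
pid.file: /var/run/kibana/instance1.pid
logging.dest: /var/log/kibana/instance1.log

# Identical on every instance behind the load balancer
elasticsearch.hosts: ["http://elasticsearch1:9200", "http://elasticsearch2:9200"]
xpack.security.encryptionKey: "session_key_at_least_32_characters_long"
xpack.reporting.encryptionKey: "reporting_key_at_least_32_chars_long"
xpack.encryptedSavedObjects.encryptionKey: "saved_objects_key_at_least_32_chars"
----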
Separate configuration files can be passed on the command line with the `-c` flag:

[source,sh]
----
bin/kibana -c config/instance1.yml
bin/kibana -c config/instance2.yml
----
To access multiple load-balanced {kib} clusters from the same browser, set `xpack.security.cookieName` in the configuration. This avoids conflicts between cookies from the different {kib} instances. Within each cluster, all {kib} instances should use the same `cookieName` value. This provides seamless high availability and keeps the session active if the instance currently in use fails.
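For example, every instance in one cluster might share a cluster-specific cookie name (the value below is a placeholder, not a default):

[source,yaml]
----
# kibana.yml on every instance in this cluster; a second cluster
# would use a different value, for example "sid-cluster-b"
xpack.security.cookieName: "sid-cluster-a"
----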
{kib} can be configured to connect to multiple {es} nodes in the same cluster. If a node becomes unavailable, {kib} transparently connects to an available node and continues operating. Requests to available hosts are routed in a round-robin fashion.
In `kibana.yml`:

[source,yaml]
----
elasticsearch.hosts:
  - http://elasticsearch1:9200
  - http://elasticsearch2:9200
----
Related configurations include `elasticsearch.sniffInterval`, `elasticsearch.sniffOnStart`, and `elasticsearch.sniffOnConnectionFault`. These can be used to automatically update the list of hosts as a cluster is resized. Parameters can be found on the {kibana-ref}/settings.html[settings page].
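A minimal sketch of how these options might be combined in `kibana.yml` (the values are illustrative, not recommendations):

[source,yaml]
----
# Re-discover cluster nodes on startup, after a connection fault,
# and on an interval (in milliseconds); values are illustrative
elasticsearch.sniffOnStart: true
elasticsearch.sniffOnConnectionFault: true
elasticsearch.sniffInterval: 60000
----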
{kib} has a default memory limit that scales based on the total memory available. In some scenarios, such as large reporting jobs, it may make sense to tweak this limit to meet more specific requirements.

A limit can be defined by setting `--max-old-space-size` in the `node.options` config file found inside the `kibana/config` folder, or in any other folder configured with the `KBN_PATH_CONF` environment variable. For example, on Debian-based systems, the folder is `/etc/kibana`.
The option accepts a limit in MB:

[source,txt]
----
--max-old-space-size=2048
----
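If the config directory has been relocated, the same `node.options` file is edited under whatever `KBN_PATH_CONF` points to. A minimal sketch, assuming the Debian-style layout mentioned above:

[source,sh]
----
# Point Kibana at a non-default config directory (path is an example)
export KBN_PATH_CONF=/etc/kibana
# The memory limit is then read from /etc/kibana/node.options
grep -- "--max-old-space-size" /etc/kibana/node.options
----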