[7.17] Remove Agent Debug Info (#187126) #187805
Merged
checks-reporter / X-Pack Chrome Functional tests / Group 3
succeeded
Jul 8, 2024 in 39m 31s
node scripts/functional_tests --bail --kibana-install-dir /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720479271296638923/elastic/kibana-pull-request/kibana-build-xpack --include-tag ciGroup3
[truncated]
layer chart switch
└-> "before each" hook: global before each for "should transition from bar chart to line chart using layer chart switch"
└- ✓ pass (18.2s)
└-> should transition from pie chart to treemap chart
└-> "before each" hook: global before each for "should transition from pie chart to treemap chart"
└- ✓ pass (26.0s)
└-> should create a pie chart and switch to datatable
└-> "before each" hook: global before each for "should create a pie chart and switch to datatable"
└- ✓ pass (53.0s)
└-> should create a heatmap chart and transition to barchart
└-> "before each" hook: global before each for "should create a heatmap chart and transition to barchart"
└- ✓ pass (1.0m)
└-> should create a valid XY chart with references
└-> "before each" hook: global before each for "should create a valid XY chart with references"
└- ✓ pass (1.0m)
└-> should allow formatting on references
└-> "before each" hook: global before each for "should allow formatting on references"
└- ✓ pass (1.0m)
└-> should handle edge cases in reference-based operations
└-> "before each" hook: global before each for "should handle edge cases in reference-based operations"
└- ✓ pass (1.0m)
└-> should keep the field selection while transitioning to every reference-based operation
└-> "before each" hook: global before each for "should keep the field selection while transitioning to every reference-based operation"
└- ✓ pass (44.1s)
└-> should not leave an incomplete column in the visualization config with field-based operation
└-> "before each" hook: global before each for "should not leave an incomplete column in the visualization config with field-based operation"
└- ✓ pass (31.0s)
└-> should revert to previous configuration and not leave an incomplete column in the visualization config with reference-based operations
└-> "before each" hook: global before each for "should revert to previous configuration and not leave an incomplete column in the visualization config with reference-based operations"
└- ✓ pass (42.8s)
└-> should transition from unique count to last value
└-> "before each" hook: global before each for "should transition from unique count to last value"
└- ✓ pass (42.4s)
└-> should allow to change index pattern
└-> "before each" hook: global before each for "should allow to change index pattern"
└- ✓ pass (2.4s)
└-> should show a download button only when the configuration is valid
└-> "before each" hook: global before each for "should show a download button only when the configuration is valid"
└- ✓ pass (44.8s)
└-> should allow filtering by legend on an xy chart
└-> "before each" hook: global before each for "should allow filtering by legend on an xy chart"
└- ✓ pass (49.7s)
└-> should allow filtering by legend on a pie chart
└-> "before each" hook: global before each for "should allow filtering by legend on a pie chart"
└- ✓ pass (55.3s)
└-> "after all" hook: afterTestSuite.trigger for "should allow filtering by legend on a pie chart"
└-: lens query context
└-> "before all" hook: beforeTestSuite.trigger for "should carry over time range and pinned filters to discover"
└-> "before all" hook for "should carry over time range and pinned filters to discover"
└-> should carry over time range and pinned filters to discover
└-> "before each" hook: global before each for "should carry over time range and pinned filters to discover"
└- ✓ pass (41.8s)
└-> should remember time range and pinned filters from discover
└-> "before each" hook: global before each for "should remember time range and pinned filters from discover"
└- ✓ pass (26.4s)
└-> keep time range and pinned filters after refresh
└-> "before each" hook: global before each for "keep time range and pinned filters after refresh"
└- ✓ pass (5.4s)
└-> keeps selected index pattern after refresh
└-> "before each" hook: global before each for "keeps selected index pattern after refresh"
└- ✓ pass (4.5s)
└-> keeps time range and pinned filters after refreshing directly after saving
└-> "before each" hook: global before each for "keeps time range and pinned filters after refreshing directly after saving"
└- ✓ pass (38.1s)
└-: Navigation search
└-> "before all" hook: beforeTestSuite.trigger in "Navigation search"
└-: when opening from empty visualization to existing one
└-> "before all" hook: beforeTestSuite.trigger for "filters, time and query reflect the visualization state"
└-> "before all" hook for "filters, time and query reflect the visualization state"
└-> filters, time and query reflect the visualization state
└-> "before each" hook: global before each for "filters, time and query reflect the visualization state"
└- ✓ pass (160ms)
└-> preserves time range
└-> "before each" hook: global before each for "preserves time range"
└- ✓ pass (3.8s)
└-> loads filters
└-> "before each" hook: global before each for "loads filters"
└- ✓ pass (22ms)
└-> loads query
└-> "before each" hook: global before each for "loads query"
└- ✓ pass (21ms)
└-> "after all" hook: afterTestSuite.trigger for "loads query"
└-: when opening from existing visualization to empty one
└-> "before all" hook: beforeTestSuite.trigger for "preserves time range"
└-> "before all" hook for "preserves time range"
└-> preserves time range
└-> "before each" hook: global before each for "preserves time range"
└- ✓ pass (3.7s)
└-> cleans filters
└-> "before each" hook: global before each for "cleans filters"
└- ✓ pass (10.0s)
└-> cleans query
└-> "before each" hook: global before each for "cleans query"
└- ✓ pass (17ms)
└-> filters, time and query reflect the visualization state
└-> "before each" hook: global before each for "filters, time and query reflect the visualization state"
└- ✓ pass (50ms)
└-> "after all" hook: afterTestSuite.trigger for "filters, time and query reflect the visualization state"
└-> "after all" hook: afterTestSuite.trigger in "Navigation search"
└-: Switching in Visualize App
└-> "before all" hook: beforeTestSuite.trigger for "when moving from existing to empty workspace, preserves time range, cleans filters and query"
└-> when moving from existing to empty workspace, preserves time range, cleans filters and query
└-> "before each" hook: global before each for "when moving from existing to empty workspace, preserves time range, cleans filters and query"
└- ✓ pass (44.7s)
└-> when moving from empty to existing workspace, preserves time range and loads filters and query
└-> "before each" hook: global before each for "when moving from empty to existing workspace, preserves time range and loads filters and query"
└- ✓ pass (8.0s)
└-> "after all" hook: afterTestSuite.trigger for "when moving from empty to existing workspace, preserves time range and loads filters and query"
└-> "after all" hook for "keeps time range and pinned filters after refreshing directly after saving"
└-> "after all" hook: afterTestSuite.trigger for "keeps time range and pinned filters after refreshing directly after saving"
└-> "after all" hook: afterTestSuite.trigger in ""
└-> "after all" hook in "lens app"
└-> "after all" hook: afterTestSuite.trigger in "lens app"
│
│ 42 passing (23.0m)
│
│ warn browser[SEVERE] ERROR FETCHING BROWSER LOGS: This driver instance does not have a valid session ID (did you call WebDriver.quit()?) and may no longer be used.
│ proc [kibana] log [23:25:43.828] [info][plugins-system][standard] Stopping all plugins.
│ proc [kibana] log [23:25:43.830] [info][kibana-monitoring][monitoring][monitoring][plugins] Monitoring stats collection is stopped
│ info [kibana] exited with null after 1439.2 seconds
│ info [es] stopping node ftr
│ info [o.e.x.m.p.NativeController] [ftr] Native controller process has stopped - no new native processes can be started
│ info [o.e.n.Node] [ftr] stopping ...
│ info [o.e.x.w.WatcherService] [ftr] stopping watch service, reason [shutdown initiated]
│ info [o.e.x.w.WatcherLifeCycleService] [ftr] watcher has stopped and shutdown
│ info [o.e.n.Node] [ftr] stopped
│ info [o.e.n.Node] [ftr] closing ...
│ info [o.e.n.Node] [ftr] closed
│ info [es] stopped
│ info [es] no debug files found, assuming es did not write any
│ info [es] cleanup complete
--- [2/2] Running x-pack/test/search_sessions_integration/config.ts
info Installing from snapshot
│ info version: 7.17.23
│ info install path: /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720479271296638923/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup3-cluster-ftr
│ info license: trial
│ info Downloading snapshot manifest from https://storage.googleapis.com/kibana-ci-es-snapshots-daily/7.17.23/archives/20240708-131119_42b93a53/manifest.json
│ info verifying cache of https://storage.googleapis.com/kibana-ci-es-snapshots-daily/7.17.23/archives/20240708-131119_42b93a53/elasticsearch-7.17.23-SNAPSHOT-linux-x86_64.tar.gz
│ info etags match, reusing cache from 2024-07-08T23:01:16.494Z
│ info extracting /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720479271296638923/elastic/kibana-pull-request/kibana/.es/cache/elasticsearch-7.17.23-SNAPSHOT-linux-x86_64.tar.gz
│ info extracted to /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720479271296638923/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup3-cluster-ftr
│ info created /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720479271296638923/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup3-cluster-ftr/ES_TMPDIR
│ info setting secure setting bootstrap.password to changeme
info [es] starting node ftr on port 9220
info Starting
│ ERROR Jul 08, 2024 11:25:57 PM sun.util.locale.provider.LocaleProviderAdapter <clinit>
│ WARNING: COMPAT locale provider will be removed in a future release
│
│ info [o.e.n.Node] [ftr] version[7.17.23-SNAPSHOT], pid[5001], build[default/tar/42b93a534929add031e668becc4565463f2c4b32/2024-07-08T13:06:16.104506372Z], OS[Linux/5.15.0-1062-gcp/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/22.0.1/22.0.1+8-16]
│ info [o.e.n.Node] [ftr] JVM home [/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720479271296638923/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup3-cluster-ftr/jdk], using bundled JDK [true]
│ info [o.e.n.Node] [ftr] JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=SPI,COMPAT, --add-opens=java.base/java.io=ALL-UNNAMED, -Djava.security.manager=allow, -XX:+UseG1GC, -Djava.io.tmpdir=/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720479271296638923/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup3-cluster-ftr/ES_TMPDIR, -XX:+HeapDumpOnOutOfMemoryError, -XX:+ExitOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -XX:+UnlockDiagnosticVMOptions, -XX:G1NumCollectionsKeepPinned=10000000, -Xms1536m, -Xmx1536m, -XX:MaxDirectMemorySize=805306368, -XX:G1HeapRegionSize=4m, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=15, -Des.path.home=/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720479271296638923/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup3-cluster-ftr, -Des.path.conf=/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720479271296638923/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup3-cluster-ftr/config, -Des.distribution.flavor=default, -Des.distribution.type=tar, -Des.bundled_jdk=true]
│ info [o.e.n.Node] [ftr] version [7.17.23-SNAPSHOT] is a pre-release version of Elasticsearch and is not suitable for production
│ info [o.e.p.PluginsService] [ftr] loaded module [aggs-matrix-stats]
│ info [o.e.p.PluginsService] [ftr] loaded module [analysis-common]
│ info [o.e.p.PluginsService] [ftr] loaded module [constant-keyword]
│ info [o.e.p.PluginsService] [ftr] loaded module [frozen-indices]
│ info [o.e.p.PluginsService] [ftr] loaded module [ingest-common]
│ info [o.e.p.PluginsService] [ftr] loaded module [ingest-geoip]
│ info [o.e.p.PluginsService] [ftr] loaded module [ingest-user-agent]
│ info [o.e.p.PluginsService] [ftr] loaded module [kibana]
│ info [o.e.p.PluginsService] [ftr] loaded module [lang-expression]
│ info [o.e.p.PluginsService] [ftr] loaded module [lang-mustache]
│ info [o.e.p.PluginsService] [ftr] loaded module [lang-painless]
│ info [o.e.p.PluginsService] [ftr] loaded module [legacy-geo]
│ info [o.e.p.PluginsService] [ftr] loaded module [mapper-extras]
│ info [o.e.p.PluginsService] [ftr] loaded module [mapper-version]
│ info [o.e.p.PluginsService] [ftr] loaded module [parent-join]
│ info [o.e.p.PluginsService] [ftr] loaded module [percolator]
│ info [o.e.p.PluginsService] [ftr] loaded module [rank-eval]
│ info [o.e.p.PluginsService] [ftr] loaded module [reindex]
│ info [o.e.p.PluginsService] [ftr] loaded module [repositories-metering-api]
│ info [o.e.p.PluginsService] [ftr] loaded module [repository-encrypted]
│ info [o.e.p.PluginsService] [ftr] loaded module [repository-url]
│ info [o.e.p.PluginsService] [ftr] loaded module [runtime-fields-common]
│ info [o.e.p.PluginsService] [ftr] loaded module [search-business-rules]
│ info [o.e.p.PluginsService] [ftr] loaded module [searchable-snapshots]
│ info [o.e.p.PluginsService] [ftr] loaded module [snapshot-repo-test-kit]
│ info [o.e.p.PluginsService] [ftr] loaded module [spatial]
│ info [o.e.p.PluginsService] [ftr] loaded module [test-delayed-aggs]
│ info [o.e.p.PluginsService] [ftr] loaded module [test-die-with-dignity]
│ info [o.e.p.PluginsService] [ftr] loaded module [test-error-query]
│ info [o.e.p.PluginsService] [ftr] loaded module [transform]
│ info [o.e.p.PluginsService] [ftr] loaded module [transport-netty4]
│ info [o.e.p.PluginsService] [ftr] loaded module [unsigned-long]
│ info [o.e.p.PluginsService] [ftr] loaded module [vector-tile]
│ info [o.e.p.PluginsService] [ftr] loaded module [vectors]
│ info [o.e.p.PluginsService] [ftr] loaded module [wildcard]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-aggregate-metric]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-analytics]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-async]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-async-search]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-autoscaling]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-ccr]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-core]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-data-streams]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-deprecation]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-enrich]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-eql]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-fleet]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-graph]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-identity-provider]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-ilm]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-logstash]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-ml]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-monitoring]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-ql]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-rollup]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-security]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-shutdown]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-sql]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-stack]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-text-structure]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-voting-only-node]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-watcher]
│ info [o.e.p.PluginsService] [ftr] no plugins loaded
│ info [o.e.e.NodeEnvironment] [ftr] using [1] data paths, mounts [[/opt/local-ssd (/dev/nvme0n1)]], net usable_space [343.5gb], net total_space [368gb], types [ext4]
│ info [o.e.e.NodeEnvironment] [ftr] heap size [1.5gb], compressed ordinary object pointers [true]
│ info [o.e.n.Node] [ftr] node name [ftr], node ID [DdYet8mKQvW_UZd4rCBXoA], cluster name [job-kibana-default-ciGroup3-cluster-ftr], roles [transform, data_frozen, master, remote_cluster_client, data, ml, data_content, data_hot, data_warm, data_cold, ingest]
│ info [o.e.x.m.p.l.CppLogMessageHandler] [ftr] [controller/5170] [Main.cc@122] controller (64 bit): Version 7.17.23-SNAPSHOT (Build 3e4489a02bea5d) Copyright (c) 2024 Elasticsearch BV
│ info [o.e.x.s.a.Realms] [ftr] license mode is [trial], currently licensed security realms are [reserved/reserved,file/default_file,native/default_native]
│ info [o.e.x.s.a.s.FileRolesStore] [ftr] parsed [0] roles from file [/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720479271296638923/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup3-cluster-ftr/config/roles.yml]
│ info [o.e.i.g.ConfigDatabases] [ftr] initialized default databases [[GeoLite2-Country.mmdb, GeoLite2-City.mmdb, GeoLite2-ASN.mmdb]], config databases [[]] and watching [/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720479271296638923/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup3-cluster-ftr/config/ingest-geoip] for changes
│ info [o.e.i.g.DatabaseNodeService] [ftr] initialized database registry, using geoip-databases directory [/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720479271296638923/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup3-cluster-ftr/ES_TMPDIR/geoip-databases/DdYet8mKQvW_UZd4rCBXoA]
│ info [o.e.t.NettyAllocator] [ftr] creating NettyAllocator with the following configs: [name=elasticsearch_configured, chunk_size=1mb, suggested_max_allocation_size=1mb, factors={es.unsafe.use_netty_default_chunk_and_page_size=false, g1gc_enabled=true, g1gc_region_size=4mb}]
│ info [o.e.i.r.RecoverySettings] [ftr] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
│ info [o.e.d.DiscoveryModule] [ftr] using discovery type [single-node] and seed hosts providers [settings]
│ info [o.e.g.DanglingIndicesState] [ftr] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
│ info [o.e.n.Node] [ftr] initialized
│ info [o.e.n.Node] [ftr] starting ...
│ info [o.e.x.s.c.f.PersistentCache] [ftr] persistent cache index loaded
│ info [o.e.x.d.l.DeprecationIndexingComponent] [ftr] deprecation component started
│ info [o.e.t.TransportService] [ftr] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
│ info [o.e.x.m.Monitoring] [ftr] creating template [.monitoring-alerts-7] with version [7]
│ info [o.e.x.m.Monitoring] [ftr] creating template [.monitoring-es] with version [7]
│ info [o.e.x.m.Monitoring] [ftr] creating template [.monitoring-kibana] with version [7]
│ info [o.e.x.m.Monitoring] [ftr] creating template [.monitoring-logstash] with version [7]
│ info [o.e.x.m.Monitoring] [ftr] creating template [.monitoring-beats] with version [7]
│ info [o.e.c.c.Coordinator] [ftr] setting initial configuration to VotingConfiguration{DdYet8mKQvW_UZd4rCBXoA}
│ info [o.e.c.s.MasterService] [ftr] elected-as-master ([1] nodes joined)[{ftr}{DdYet8mKQvW_UZd4rCBXoA}{mLNOux3pSPS03IdWLdTjQQ}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 1, version: 1, delta: master node changed {previous [], current [{ftr}{DdYet8mKQvW_UZd4rCBXoA}{mLNOux3pSPS03IdWLdTjQQ}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}]}
│ info [o.e.c.c.CoordinationState] [ftr] cluster UUID set to [aeVO03PXQn2FCO35r_-GVg]
│ info [o.e.c.s.ClusterApplierService] [ftr] master node changed {previous [], current [{ftr}{DdYet8mKQvW_UZd4rCBXoA}{mLNOux3pSPS03IdWLdTjQQ}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}]}, term: 1, version: 1, reason: Publication{term=1, version=1}
│ info [o.e.h.AbstractHttpServerTransport] [ftr] publish_address {127.0.0.1:9220}, bound_addresses {[::1]:9220}, {127.0.0.1:9220}
│ info [o.e.n.Node] [ftr] started
│ info [o.e.g.GatewayService] [ftr] recovered [0] indices into cluster_state
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.ml-anomalies-] for index patterns [.ml-anomalies-*]
│ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
│ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
│ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
│ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
│ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.ml-notifications-000002] for index patterns [.ml-notifications-000002]
│ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.ml-stats] for index patterns [.ml-stats-*]
│ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
│ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.ml-state] for index patterns [.ml-state*]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [logs-settings]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [data-streams-mappings]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [logs-mappings]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [synthetics-mappings]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [metrics-settings]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [metrics-mappings]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [synthetics-settings]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.watch-history-13] for index patterns [.watcher-history-13*]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [ilm-history] for index patterns [ilm-history-5*]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.slm-history] for index patterns [.slm-history-5*]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.deprecation-indexing-mappings]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.deprecation-indexing-settings]
│ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.security-7] creating index, cause [api], templates [], shards [1]/[0]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [logs] for index patterns [logs-*-*]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [metrics] for index patterns [metrics-*-*]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [synthetics] for index patterns [synthetics-*-*]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.deprecation-indexing-template] for index patterns [.logs-deprecation.*]
│ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.security-7][0]]]).
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [ml-size-based-ilm-policy]
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [synthetics]
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [30-days-default]
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [7-days-default]
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [logs]
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [metrics]
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [90-days-default]
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [365-days-default]
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [180-days-default]
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [watch-history-ilm-policy]
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [ilm-history-ilm-policy]
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [slm-history-ilm-policy]
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [.deprecation-indexing-ilm-policy]
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [.fleet-actions-results-ilm-policy]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [system_indices_superuser]
│ info [o.e.l.LicenseService] [ftr] license [0aa35e0a-6de6-4121-b0b9-31618ad47afc] mode [trial] - valid
│ info [o.e.x.s.a.Realms] [ftr] license mode is [trial], currently licensed security realms are [reserved/reserved,file/default_file,native/default_native]
│ info [o.e.x.s.s.SecurityStatusChangeListener] [ftr] Active license is now [TRIAL]; Security is enabled
│ info [o.e.x.s.a.u.TransportPutUserAction] [ftr] added user [system_indices_superuser]
│ info starting [kibana] > /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720479271296638923/elastic/kibana-pull-request/kibana-build-xpack/bin/kibana --logging.json=false --server.port=5620 --elasticsearch.hosts=http://localhost:9220 --elasticsearch.username=kibana_system --elasticsearch.password=changeme --data.search.aggs.shardDelay.enabled=true --security.showInsecureClusterWarning=false --telemetry.banner=false --telemetry.optIn=false --telemetry.sendUsageTo=staging --server.maxPayload=1679958 --plugin-path=/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720479271296638923/elastic/kibana-pull-request/kibana/test/common/fixtures/plugins/newsfeed --newsfeed.service.urlRoot=http://localhost:5620 --newsfeed.service.pathTemplate=/api/_newsfeed-FTS-external-service-simulators/kibana/v{VERSION}.json --logging.appenders.deprecation.type=console --logging.appenders.deprecation.layout.type=json --logging.loggers[0].name=elasticsearch.deprecation --logging.loggers[0].level=all --logging.loggers[0].appenders[0]=deprecation --status.allowAnonymous=true --server.uuid=5b2de169-2785-441b-ae8c-186a1936b17d --xpack.maps.showMapsInspectorAdapter=true --xpack.maps.preserveDrawingBuffer=true --xpack.security.encryptionKey="wuGNaIhoMpk5sO4UBxgr3NyW1sFcLgIf" --xpack.encryptedSavedObjects.encryptionKey="DkdXazszSCYexXqz4YktBGHCRkV6hyNK" --xpack.discoverEnhanced.actions.exploreDataInContextMenu.enabled=true --savedObjects.maxImportPayloadBytes=10485760 --xpack.siem.enabled=true
│ proc [kibana] Kibana is currently running with legacy OpenSSL providers enabled! For details and instructions on how to disable see https://www.elastic.co/guide/en/kibana/7.17/production.html#openssl-legacy-provider
│ proc [kibana] log [23:26:28.766] [info][plugins-service] Plugin "metricsEntities" is disabled.
│ proc [kibana] log [23:26:28.859] [info][server][Preboot][http] http server running at http://localhost:5620
│ proc [kibana] log [23:26:28.909] [warning][config][deprecation] Starting in 8.0, the Kibana logging format will be changing. This may affect you if you are doing any special handling of your Kibana logs, such as ingesting logs into Elasticsearch for further analysis. If you are using the new logging configuration, you are already receiving logs in both old and new formats, and the old format will simply be going away. If you are not yet using the new logging configuration, the log format will change upon upgrade to 8.0. Beginning in 8.0, the format of JSON logs will be ECS-compatible JSON, and the default pattern log format will be configurable with our new logging system. Please refer to the documentation for more information about the new logging format.
│ proc [kibana] log [23:26:28.910] [warning][config][deprecation] The default mechanism for Reporting privileges will work differently in future versions, which will affect the behavior of this cluster. Set "xpack.reporting.roles.enabled" to "false" to adopt the future behavior before upgrading.
│ proc [kibana] log [23:26:28.910] [warning][config][deprecation] Setting "security.showInsecureClusterWarning" has been replaced by "xpack.security.showInsecureClusterWarning"
│ proc [kibana] log [23:26:28.911] [warning][config][deprecation] User sessions will automatically time out after 8 hours of inactivity starting in 8.0. Override this value to change the timeout.
│ proc [kibana] log [23:26:28.912] [warning][config][deprecation] Users are automatically required to log in again after 30 days starting in 8.0. Override this value to change the timeout.
│ proc [kibana] log [23:26:28.912] [warning][config][deprecation] Setting "xpack.siem.enabled" has been replaced by "xpack.securitySolution.enabled"
│ proc [kibana] log [23:26:29.062] [info][plugins-system][standard] Setting up [114] plugins: [newsfeedFixtures,translations,licensing,globalSearch,globalSearchProviders,features,licenseApiGuard,code,usageCollection,xpackLegacy,taskManager,telemetryCollectionManager,telemetryCollectionXpack,kibanaUsageCollection,share,embeddable,uiActionsEnhanced,screenshotMode,banners,telemetry,newsfeed,mapsEms,mapsLegacy,kibanaLegacy,fieldFormats,expressions,dataViews,charts,esUiShared,bfetch,data,savedObjects,presentationUtil,expressionShape,expressionRevealImage,expressionRepeatImage,expressionMetric,expressionImage,customIntegrations,home,searchprofiler,painlessLab,grokdebugger,management,watcher,licenseManagement,advancedSettings,spaces,security,savedObjectsTagging,reporting,canvas,lists,ingestPipelines,fileUpload,encryptedSavedObjects,dataEnhanced,cloud,snapshotRestore,eventLog,actions,alerting,triggersActionsUi,transform,stackAlerts,ruleRegistry,visualizations,visTypeXy,visTypeVislib,visTypeVega,visTypeTimelion,visTypeTagcloud,visTypeTable,visTypePie,visTypeMetric,visTypeMarkdown,tileMap,regionMap,expressionTagcloud,expressionMetricVis,console,graph,fleet,indexManagement,remoteClusters,crossClusterReplication,indexLifecycleManagement,dashboard,maps,dashboardMode,dashboardEnhanced,visualize,visTypeTimeseries,rollup,indexPatternFieldEditor,lens,cases,timelines,discover,osquery,observability,discoverEnhanced,dataVisualizer,ml,uptime,securitySolution,infra,upgradeAssistant,monitoring,logstash,enterpriseSearch,apm,savedObjectsManagement,indexPatternManagement]
│ proc [kibana] log [23:26:29.085] [info][plugins][taskManager] TaskManager is identified by the Kibana UUID: 5b2de169-2785-441b-ae8c-186a1936b17d
│ proc [kibana] log [23:26:29.222] [warning][config][plugins][security] Session cookies will be transmitted over insecure connections. This is not recommended.
│ proc [kibana] log [23:26:29.245] [warning][config][plugins][security] Session cookies will be transmitted over insecure connections. This is not recommended.
│ proc [kibana] log [23:26:29.266] [warning][config][plugins][reporting] Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
│ proc [kibana] log [23:26:29.291] [info][encryptedSavedObjects][plugins] Hashed 'xpack.encryptedSavedObjects.encryptionKey' for this instance: nnkvE7kjGgidcjXzmLYBbIh4THhRWI1/7fUjAEaJWug=
│ proc [kibana] log [23:26:29.332] [info][plugins][ruleRegistry] Installing common resources shared between all indices
│ proc [kibana] log [23:26:29.835] [info][config][plugins][reporting] Chromium sandbox provides an additional layer of protection, and is supported for Linux Ubuntu 20.04 OS. Automatically enabling Chromium sandbox.
│ proc [kibana] log [23:26:30.117] [info][savedobjects-service] Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations...
│ proc [kibana] log [23:26:30.118] [info][savedobjects-service] Starting saved objects migrations
│ proc [kibana] log [23:26:30.177] [info][savedobjects-service] [.kibana] INIT -> CREATE_NEW_TARGET. took: 26ms.
│ proc [kibana] log [23:26:30.185] [info][savedobjects-service] [.kibana_task_manager] INIT -> CREATE_NEW_TARGET. took: 31ms.
│ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.kibana_task_manager_7.17.23_001] creating index, cause [api], templates [], shards [1]/[1]
│ info [o.e.c.r.a.AllocationService] [ftr] updating number_of_replicas to [0] for indices [.kibana_task_manager_7.17.23_001]
│ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.kibana_7.17.23_001] creating index, cause [api], templates [], shards [1]/[1]
│ info [o.e.c.r.a.AllocationService] [ftr] updating number_of_replicas to [0] for indices [.kibana_7.17.23_001]
│ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana_task_manager_7.17.23_001][0], [.kibana_7.17.23_001][0]]]).
│ proc [kibana] log [23:26:30.484] [info][savedobjects-service] [.kibana] CREATE_NEW_TARGET -> MARK_VERSION_INDEX_READY. took: 308ms.
│ proc [kibana] log [23:26:30.487] [info][savedobjects-service] [.kibana_task_manager] CREATE_NEW_TARGET -> MARK_VERSION_INDEX_READY. took: 302ms.
│ proc [kibana] log [23:26:30.594] [info][savedobjects-service] [.kibana] MARK_VERSION_INDEX_READY -> DONE. took: 110ms.
│ proc [kibana] log [23:26:30.595] [info][savedobjects-service] [.kibana] Migration completed after 445ms
│ proc [kibana] log [23:26:30.628] [info][savedobjects-service] [.kibana_task_manager] MARK_VERSION_INDEX_READY -> DONE. took: 141ms.
│ proc [kibana] log [23:26:30.629] [info][savedobjects-service] [.kibana_task_manager] Migration completed after 474ms
│ proc [kibana] log [23:26:30.634] [info][plugins-system][standard] Starting [114] plugins: [newsfeedFixtures,translations,licensing,globalSearch,globalSearchProviders,features,licenseApiGuard,code,usageCollection,xpackLegacy,taskManager,telemetryCollectionManager,telemetryCollectionXpack,kibanaUsageCollection,share,embeddable,uiActionsEnhanced,screenshotMode,banners,telemetry,newsfeed,mapsEms,mapsLegacy,kibanaLegacy,fieldFormats,expressions,dataViews,charts,esUiShared,bfetch,data,savedObjects,presentationUtil,expressionShape,expressionRevealImage,expressionRepeatImage,expressionMetric,expressionImage,customIntegrations,home,searchprofiler,painlessLab,grokdebugger,management,watcher,licenseManagement,advancedSettings,spaces,security,savedObjectsTagging,reporting,canvas,lists,ingestPipelines,fileUpload,encryptedSavedObjects,dataEnhanced,cloud,snapshotRestore,eventLog,actions,alerting,triggersActionsUi,transform,stackAlerts,ruleRegistry,visualizations,visTypeXy,visTypeVislib,visTypeVega,visTypeTimelion,visTypeTagcloud,visTypeTable,visTypePie,visTypeMetric,visTypeMarkdown,tileMap,regionMap,expressionTagcloud,expressionMetricVis,console,graph,fleet,indexManagement,remoteClusters,crossClusterReplication,indexLifecycleManagement,dashboard,maps,dashboardMode,dashboardEnhanced,visualize,visTypeTimeseries,rollup,indexPatternFieldEditor,lens,cases,timelines,discover,osquery,observability,discoverEnhanced,dataVisualizer,ml,uptime,securitySolution,infra,upgradeAssistant,monitoring,logstash,enterpriseSearch,apm,savedObjectsManagement,indexPatternManagement]
│ proc [kibana] log [23:26:31.973] [info][monitoring][monitoring][plugins] config sourced from: production cluster
│ proc [kibana] log [23:26:33.671] [info][server][Kibana][http] http server running at http://localhost:5620
│ proc [kibana] log [23:26:33.741] [info][status] Kibana is now degraded
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [.alerts-ilm-policy]
│ info [o.e.c.m.MetadataMappingService] [ftr] [.kibana_task_manager_7.17.23_001/8-_T6xr0SI2A-qgnYECbBA] update_mapping [_doc]
│ info [o.e.c.m.MetadataMappingService] [ftr] [.kibana_7.17.23_001/d0TdB2cXR1ePJz6nR3SRxA] update_mapping [_doc]
│ proc [kibana] log [23:26:33.992] [info][kibana-monitoring][monitoring][monitoring][plugins] Starting monitoring stats collection
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-technical-mappings]
│ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.apm-agent-configuration] creating index, cause [api], templates [], shards [1]/[1]
│ info [o.e.c.r.a.AllocationService] [ftr] updating number_of_replicas to [0] for indices [.apm-agent-configuration]
│ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.apm-custom-link] creating index, cause [api], templates [], shards [1]/[1]
│ info [o.e.c.r.a.AllocationService] [ftr] updating number_of_replicas to [0] for indices [.apm-custom-link]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-ecs-mappings]
│ proc [kibana] log [23:26:34.577] [info][plugins][ruleRegistry] Installed common resources shared between all indices
│ proc [kibana] log [23:26:34.578] [info][plugins][ruleRegistry] Installing resources for index .alerts-observability.uptime.alerts
│ proc [kibana] log [23:26:34.580] [info][plugins][ruleRegistry] Installing resources for index .alerts-observability.logs.alerts
│ proc [kibana] log [23:26:34.581] [info][plugins][ruleRegistry] Installing resources for index .alerts-observability.metrics.alerts
│ proc [kibana] log [23:26:34.581] [info][plugins][ruleRegistry] Installing resources for index .alerts-observability.apm.alerts
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.kibana_security_session_index_template_1] for index patterns [.kibana_security_session_1]
│ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.apm-agent-configuration][0], [.apm-custom-link][0]]]).
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-observability.uptime.alerts-mappings]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-observability.metrics.alerts-mappings]
│ proc [kibana] log [23:26:34.818] [info][plugins][ruleRegistry] Installed resources for index .alerts-observability.uptime.alerts
│ proc [kibana] log [23:26:34.875] [info][plugins][ruleRegistry] Installed resources for index .alerts-observability.metrics.alerts
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-observability.apm.alerts-mappings]
│ proc [kibana] log [23:26:34.926] [info][plugins][ruleRegistry] Installed resources for index .alerts-observability.apm.alerts
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-observability.logs.alerts-mappings]
│ proc [kibana] log [23:26:34.990] [info][plugins][ruleRegistry] Installed resources for index .alerts-observability.logs.alerts
│ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.kibana_security_session_1] creating index, cause [api], templates [.kibana_security_session_index_template_1], shards [1]/[0]
│ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana_security_session_1][0]]]).
│ info [o.e.c.m.MetadataMappingService] [ftr] [.kibana_7.17.23_001/d0TdB2cXR1ePJz6nR3SRxA] update_mapping [_doc]
│ info [o.e.c.m.MetadataMappingService] [ftr] [.kibana_7.17.23_001/d0TdB2cXR1ePJz6nR3SRxA] update_mapping [_doc]
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [kibana-event-log-policy]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.kibana-event-log-7.17.23-snapshot-template] for index patterns [.kibana-event-log-7.17.23-snapshot-*]
│ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.kibana-event-log-7.17.23-snapshot-000001] creating index, cause [api], templates [.kibana-event-log-7.17.23-snapshot-template], shards [1]/[1]
│ info [o.e.c.r.a.AllocationService] [ftr] updating number_of_replicas to [0] for indices [.kibana-event-log-7.17.23-snapshot-000001]
│ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana-event-log-7.17.23-snapshot-000001][0]]]).
│ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.kibana-event-log-7.17.23-snapshot-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [kibana-event-log-policy]
│ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.kibana-event-log-7.17.23-snapshot-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] in policy [kibana-event-log-policy]
│ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.kibana-event-log-7.17.23-snapshot-000001] from [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [kibana-event-log-policy]
│ proc [kibana] log [23:26:36.549] [info][chromium][plugins][reporting] Browser executable: /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720479271296638923/elastic/kibana-pull-request/kibana-build-xpack/x-pack/plugins/reporting/chromium/headless_shell-linux_x64/headless_shell
│ proc [kibana] log [23:26:36.591] [info][plugins][reporting][store] Creating ILM policy for managing reporting indices: kibana-reporting
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [kibana-reporting]
│ proc [kibana] log [23:26:36.620] [info][plugins][securitySolution] Dependent plugin setup complete - Starting ManifestTask
│ proc [kibana] log [23:26:37.145] [info][0][1][endpoint:metadata-check-transforms-task:0][plugins][securitySolution] no endpoint metadata transforms found
│ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.ds-ilm-history-5-2024.07.08-000001] creating index, cause [initialize_data_stream], templates [ilm-history], shards [1]/[0]
│ info [o.e.c.m.MetadataCreateDataStreamService] [ftr] adding data stream [ilm-history-5] with write index [.ds-ilm-history-5-2024.07.08-000001], backing indices [], and aliases []
│ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.ds-ilm-history-5-2024.07.08-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [ilm-history-ilm-policy]
│ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.ds-ilm-history-5-2024.07.08-000001][0]]]).
│ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.ds-ilm-history-5-2024.07.08-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] in policy [ilm-history-ilm-policy]
│ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.ds-ilm-history-5-2024.07.08-000001] from [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [ilm-history-ilm-policy]
│ proc [kibana] log [23:26:42.771] [info][status] Kibana is now available (was degraded)
│ info Only running suites which are compatible with ES version 7.17.23
│ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup3' ]
│ info Remote initialized: chrome-headless-shell 126.0.6478.126
│ info chromedriver version: 126.0.6478.126 (d36ace6122e0a59570e258d82441395206d60e1c-refs/branch-heads/6478@{#1591})
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] updated role [system_indices_superuser]
│ info [o.e.x.s.a.u.TransportPutUserAction] [ftr] updated user [system_indices_superuser]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [test_logstash_reader]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_canvas_all]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_discover_all]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_dashboard_read]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_discover_read]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_visualize_read]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_visualize_all]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_dashboard_all]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_maps_all]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_maps_read]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [geoshape_data_reader]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [antimeridian_points_reader]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [antimeridian_shapes_reader]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [meta_for_geoshape_data_reader]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [geoconnections_data_reader]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [test_logs_data_reader]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [geoall_data_writer]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_index_pattern_management_all]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_devtools_read]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_ccr_role]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_upgrade_assistant_role]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [manage_rollups_role]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [test_rollup_reader]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [test_api_keys]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [manage_security]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [ccr_user]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [manage_ilm]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [index_management_user]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [snapshot_restore_user]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [ingest_pipelines_user]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [license_management_user]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [logstash_read_user]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [remote_clusters_user]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_alerts_logs_all_else_read]
│ info [o.e.x.s.a.u.TransportPutUserAction] [ftr] added user [test_user]
│ info Only running suites which are compatible with ES version 7.17.23
│ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup3' ]
│ info Starting tests
│ warn debug logs are being captured, only error logs will be written to the console
│
└-: Dashboard
└-> "before all" hook: beforeTestSuite.trigger in "Dashboard"
└-> "before all" hook in "Dashboard"
└-: dashboard with async search
└-> "before all" hook: beforeTestSuite.trigger for "not delayed should load"
└-> "before all" hook for "not delayed should load"
└-> not delayed should load
└-> "before each" hook: global before each for "not delayed should load"
└-> "before each" hook for "not delayed should load"
└- ✓ pass (13.8s)
└-> delayed should load
└-> "before each" hook: global before each for "delayed should load"
└-> "before each" hook for "delayed should load"
└- ✓ pass (23.5s)
└-> timed out should show error
└-> "before each" hook: global before each for "timed out should show error"
└-> "before each" hook for "timed out should show error"
└- ✓ pass (18.7s)
└-> multiple searches are grouped and only single error popup is shown
└-> "before each" hook: global before each for "multiple searches are grouped and only single error popup is shown"
└-> "before each" hook for "multiple searches are grouped and only single error popup is shown"
└- ✓ pass (53.0s)
└-> "after all" hook: afterTestSuite.trigger for "multiple searches are grouped and only single error popup is shown"
└-: save a search sessions
└-> "before all" hook: beforeTestSuite.trigger for "Restore using non-existing sessionId errors out. Refresh starts a new session and completes. Back button restores a session."
└-> "before all" hook for "Restore using non-existing sessionId errors out. Refresh starts a new session and completes. Back button restores a session."
└-> Restore using non-existing sessionId errors out. Refresh starts a new session and completes. Back button restores a session.
└-> "before each" hook: global before each for "Restore using non-existing sessionId errors out. Refresh starts a new session and completes. Back button restores a session."
└-> "before each" hook for "Restore using non-existing sessionId errors out. Refresh starts a new session and completes. Back button restores a session."
└- ✓ pass (38.3s)
└-> Saves and restores a session
└-> "before each" hook: global before each for "Saves and restores a session"
└-> "before each" hook for "Saves and restores a session"
└- ✓ pass (47.3s)
└-> "after all" hook for "Saves and restores a session"
└-> "after all" hook: afterTestSuite.trigger for "Saves and restores a session"
└-: save a search sessions with relative time
└-> "before all" hook: beforeTestSuite.trigger for "Saves and restores a session with relative time ranges"
└-> "before all" hook for "Saves and restores a session with relative time ranges"
└-> Saves and restores a session with relative time ranges
└-> "before each" hook: global before each for "Saves and restores a session with relative time ranges"
└-> "before each" hook for "Saves and restores a session with relative time ranges"
└- ✓ pass (1.0m)
└-> "after all" hook for "Saves and restores a session with relative time ranges"
└-> "after all" hook: afterTestSuite.trigger for "Saves and restores a session with relative time ranges"
└-: search sessions tour
└-> "before all" hook: beforeTestSuite.trigger for "search session popover auto opens when search is taking a while"
└-> "before all" hook for "search session popover auto opens when search is taking a while"
└-> search session popover auto opens when search is taking a while
└-> "before each" hook: global before each for "search session popover auto opens when search is taking a while"
└-> "before each" hook for "search session popover auto opens when search is taking a while"
└-> "before each" hook for "search session popover auto opens when search is taking a while"
└- ✓ pass (25.4s)
└-> "after all" hook for "search session popover auto opens when search is taking a while"
└-> "after all" hook: afterTestSuite.trigger for "search session popover auto opens when search is taking a while"
└-: dashboard in space
└-: Storing search sessions in space
└-> Saves and restores a session
└-: Disabled storing search sessions
└-> Doesn't allow to store a session
└-> "after all" hook in "Dashboard"
└-> "after all" hook: afterTestSuite.trigger in "Dashboard"
└-: Search session sharing
└-> "before all" hook: beforeTestSuite.trigger in "Search session sharing"
└-> "before all" hook in "Search session sharing"
└-: Search session sharing with lens
└-> "before all" hook: beforeTestSuite.trigger for "should share search session with by value lens and don't share with by reference"
└-> "before all" hook for "should share search session with by value lens and don't share with by reference"
└-> should share search session with by value lens and don't share with by reference
└-> "before each" hook: global before each for "should share search session with by value lens and don't share with by reference"
└- ✓ pass (1.0m)
└-> "after all" hook for "should share search session with by value lens and don't share with by reference"
└-> "after all" hook: afterTestSuite.trigger for "should share search session with by value lens and don't share with by reference"
└-> "after all" hook: afterTestSuite.trigger in "Search session sharing"
└-: Discover
└-> "before all" hook: beforeTestSuite.trigger in "Discover"
└-> "before all" hook in "Discover"
│ proc [kibana] {"ecs":{"version":"1.12.0"},"@timestamp":"2024-07-08T23:34:47.638+00:00","message":"Elasticsearch deprecation: 299 Elasticsearch-7.17.23-SNAPSHOT-42b93a534929add031e668becc4565463f2c4b32 \"this request accesses system indices: [.async-search, .security-7, .tasks], but in a future major version, direct access to system indices will be prevented by default\"\nOrigin:kibana\nQuery:\n200 - 2.0B\nGET /_all/_rollup/data","log":{"level":"DEBUG","logger":"elasticsearch.deprecation"},"process":{"pid":5210}}
└-: discover async search
└-> "before all" hook: beforeTestSuite.trigger for "search session id should change between searches"
└-> "before all" hook for "search session id should change between searches"
└-> search session id should change between searches
└-> "before each" hook: global before each for "search session id should change between searches"
└-> "before each" hook for "search session id should change between searches"
└- ✓ pass (17.0s)
└-> search session id should be picked up from the URL, non existing session id errors out, back button restores a session
└-> "before each" hook: global before each for "search session id should be picked up from the URL, non existing session id errors out, back button restores a session"
└-> "before each" hook for "search session id should be picked up from the URL, non existing session id errors out, back button restores a session"
└- ✓ pass (33.1s)
└-> navigation to context cleans the session
└-> "before each" hook: global before each for "navigation to context cleans the session"
└-> "before each" hook for "navigation to context cleans the session"
└- ✓ pass (2.8s)
└-> relative timerange works
└-> "before each" hook: global before each for "relative timerange works"
└-> "before each" hook for "relative timerange works"
└- ✓ pass (39.6s)
└-> "after all" hook for "relative timerange works"
└-> "after all" hook: afterTestSuite.trigger for "relative timerange works"
└-: discover in space
└-: Storing search sessions in space
└-> Saves and restores a session
└-: Disabled storing search sessions in space
└-> Doesn't allow to store a session
└-> "after all" hook: afterTestSuite.trigger in "Discover"
└-: lens search sessions
└-> "before all" hook: beforeTestSuite.trigger in "lens search sessions"
└-> "before all" hook in "lens search sessions"
└-: lens search sessions
└-> "before all" hook: beforeTestSuite.trigger for "doesn't shows search sessions indicator UI"
└-> "before all" hook for "doesn't shows search sessions indicator UI"
└-> doesn't shows search sessions indicator UI
└-> "before each" hook: global before each for "doesn't shows search sessions indicator UI"
└- ✓ pass (18.0s)
└-> "after all" hook for "doesn't shows search sessions indicator UI"
└-> "after all" hook: afterTestSuite.trigger for "doesn't shows search sessions indicator UI"
└-> "after all" hook: afterTestSuite.trigger in "lens search sessions"
└-: search sessions management
└-> "before all" hook: beforeTestSuite.trigger in "search sessions management"
└-> "before all" hook in "search sessions management"
└-: Search Sessions Management UI
└-> "before all" hook: beforeTestSuite.trigger in "Search Sessions Management UI"
└-: New search sessions
└-> "before all" hook: beforeTestSuite.trigger for "Saves a session and verifies it in the Management app"
└-> "before all" hook for "Saves a session and verifies it in the Management app"
└-> Saves a session and verifies it in the Management app
└-> "before each" hook: global before each for "Saves a session and verifies it in the Management app"
└- ✓ pass (19.5s)
└-> Deletes a session from management
└-> "before each" hook: global before each for "Deletes a session from management"
└- ✓ pass (17.0s)
└-> "after all" hook for "Deletes a session from management"
└-> "after all" hook: afterTestSuite.trigger for "Deletes a session from management"
└-: Archived search sessions
└-> "before all" hook: beforeTestSuite.trigger for "shows no items found"
└-> "before all" hook for "shows no items found"
└-> shows no items found
└-> "before each" hook: global before each for "shows no items found"
└- ✓ pass (10.1s)
└-> autorefreshes and shows items on the server
└-> "before each" hook: global before each for "autorefreshes and shows items on the server"
└- ✓ pass (9.8s)
└-> has working pagination controls
└-> "before each" hook: global before each for "has working pagination controls"
└- ✓ pass (3.2s)
└-> "after all" hook for "has working pagination controls"
└-> "after all" hook: afterTestSuite.trigger for "has working pagination controls"
└-> "after all" hook: afterTestSuite.trigger in "Search Sessions Management UI"
└-: Search Sessions Management UI permissions
└-> "before all" hook: beforeTestSuite.trigger in "Search Sessions Management UI permissions"
└-: Sessions management is not available
└-> "before all" hook: beforeTestSuite.trigger for "if no apps enable search sessions"
└-> "before all" hook for "if no apps enable search sessions"
└-> if no apps enable search sessions
└-> "before each" hook: global before each for "if no apps enable search sessions"
└- ✓ pass (5.4s)
└-> "after all" hook for "if no apps enable search sessions"
└-> "after all" hook: afterTestSuite.trigger for "if no apps enable search sessions"
└-: Sessions management is available
└-> "before all" hook: beforeTestSuite.trigger for "if one app enables search sessions"
└-> "before all" hook for "if one app enables search sessions"
└-> if one app enables search sessions
└-> "before each" hook: global before each for "if one app enables search sessions"
└- ✓ pass (8.1s)
└-> "after all" hook for "if one app enables search sessions"
└-> "after all" hook: afterTestSuite.trigger for "if one app enables search sessions"
└-> "after all" hook: afterTestSuite.trigger in "Search Sessions Management UI permissions"
└-> "after all" hook: afterTestSuite.trigger in "search sessions management"
│
│21 passing (12.0m)
│4 pending
│
│ warn browser[SEVERE] ERROR FETCHING BROWSR LOGS: ECONNREFUSED connect ECONNREFUSED 127.0.0.1:42453
│ proc [kibana] log [23:38:51.554] [info][plugins-system][standard] Stopping all plugins.
│ proc [kibana] log [23:38:51.556] [info][kibana-monitoring][monitoring][monitoring][plugins] Monitoring stats collection is stopped
│ info [kibana] exited with null after 756.9 seconds
│ info [es] stopping node ftr
│ info [o.e.x.m.p.NativeController] [ftr] Native controller process has stopped - no new native processes can be started
│ info [o.e.n.Node] [ftr] stopping ...
│ info [o.e.x.w.WatcherService] [ftr] stopping watch service, reason [shutdown initiated]
│ info [o.e.x.w.WatcherLifeCycleService] [ftr] watcher has stopped and shutdown
│ info [o.e.n.Node] [ftr] stopped
│ info [o.e.n.Node] [ftr] closing ...
│ info [o.e.n.Node] [ftr] closed
│ info [es] stopped
│ info [es] no debug files found, assuming es did not write any
│ info [es] cleanup complete