[7.17] Remove Agent Debug Info (#187126) #187805

Merged (1 commit, Jul 9, 2024)

Commit a29ce7a: Remove Agent Debug Info (#187126)
checks-reporter / X-Pack Chrome Functional tests / Group 2 succeeded Jul 8, 2024 in 42m 1s

node scripts/functional_tests --bail --kibana-install-dir /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720479270603256785/elastic/kibana-pull-request/kibana-build-xpack --include-tag ciGroup2

[truncated]
info [o.e.n.Node] [ftr] starting ...
   │ info [o.e.x.s.c.f.PersistentCache] [ftr] persistent cache index loaded
   │ info [o.e.x.d.l.DeprecationIndexingComponent] [ftr] deprecation component started
   │ info [o.e.t.TransportService] [ftr] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
   │ info [o.e.x.m.Monitoring] [ftr] creating template [.monitoring-alerts-7] with version [7]
   │ info [o.e.x.m.Monitoring] [ftr] creating template [.monitoring-es] with version [7]
   │ info [o.e.x.m.Monitoring] [ftr] creating template [.monitoring-kibana] with version [7]
   │ info [o.e.x.m.Monitoring] [ftr] creating template [.monitoring-logstash] with version [7]
   │ info [o.e.x.m.Monitoring] [ftr] creating template [.monitoring-beats] with version [7]
   │ info [o.e.c.c.Coordinator] [ftr] setting initial configuration to VotingConfiguration{FvYLtZG9Q16h-9VgqYKWdQ}
   │ info [o.e.c.s.MasterService] [ftr] elected-as-master ([1] nodes joined)[{ftr}{FvYLtZG9Q16h-9VgqYKWdQ}{2qpdZcK8Q169lkrRn59OeQ}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 1, version: 1, delta: master node changed {previous [], current [{ftr}{FvYLtZG9Q16h-9VgqYKWdQ}{2qpdZcK8Q169lkrRn59OeQ}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}]}
   │ info [o.e.c.c.CoordinationState] [ftr] cluster UUID set to [xbZ03-TGT5WwTdGSpiRfcg]
   │ info [o.e.c.s.ClusterApplierService] [ftr] master node changed {previous [], current [{ftr}{FvYLtZG9Q16h-9VgqYKWdQ}{2qpdZcK8Q169lkrRn59OeQ}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}]}, term: 1, version: 1, reason: Publication{term=1, version=1}
   │ info [o.e.h.AbstractHttpServerTransport] [ftr] publish_address {127.0.0.1:9220}, bound_addresses {[::1]:9220}, {127.0.0.1:9220}
   │ info [o.e.n.Node] [ftr] started
   │ info [o.e.g.GatewayService] [ftr] recovered [0] indices into cluster_state
   │ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
   │ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
   │ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
   │ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
   │ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
   │ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
   │ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.ml-state] for index patterns [.ml-state*]
   │ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.ml-stats] for index patterns [.ml-stats-*]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.ml-notifications-000002] for index patterns [.ml-notifications-000002]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.ml-anomalies-] for index patterns [.ml-anomalies-*]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [data-streams-mappings]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [logs-mappings]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [logs-settings]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [metrics-settings]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [metrics-mappings]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [synthetics-mappings]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [synthetics-settings]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.watch-history-13] for index patterns [.watcher-history-13*]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [ilm-history] for index patterns [ilm-history-5*]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.slm-history] for index patterns [.slm-history-5*]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.deprecation-indexing-mappings]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.deprecation-indexing-settings]
   │ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.security-7] creating index, cause [api], templates [], shards [1]/[0]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [logs] for index patterns [logs-*-*]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [metrics] for index patterns [metrics-*-*]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [synthetics] for index patterns [synthetics-*-*]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.deprecation-indexing-template] for index patterns [.logs-deprecation.*]
   │ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.security-7][0]]]).
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [ml-size-based-ilm-policy]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [logs]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [metrics]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [synthetics]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [7-days-default]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [30-days-default]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [180-days-default]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [90-days-default]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [365-days-default]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [watch-history-ilm-policy]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [ilm-history-ilm-policy]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [slm-history-ilm-policy]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [.deprecation-indexing-ilm-policy]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [.fleet-actions-results-ilm-policy]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [system_indices_superuser]
   │ info [o.e.l.LicenseService] [ftr] license [5c805a60-89ac-4744-be84-1c0caff68630] mode [trial] - valid
   │ info [o.e.x.s.a.Realms] [ftr] license mode is [trial], currently licensed security realms are [reserved/reserved,file/default_file,native/default_native]
   │ info [o.e.x.s.s.SecurityStatusChangeListener] [ftr] Active license is now [TRIAL]; Security is enabled
   │ info [o.e.x.s.a.u.TransportPutUserAction] [ftr] added user [system_indices_superuser]
   │ info starting [kibana] > /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720479270603256785/elastic/kibana-pull-request/kibana-build-xpack/bin/kibana --logging.json=false --server.port=5620 --elasticsearch.hosts=http://localhost:9220 --elasticsearch.username=kibana_system --elasticsearch.password=changeme --data.search.aggs.shardDelay.enabled=true --security.showInsecureClusterWarning=false --telemetry.banner=false --telemetry.sendUsageTo=staging --server.maxPayload=1679958 --plugin-path=/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720479270603256785/elastic/kibana-pull-request/kibana/test/common/fixtures/plugins/newsfeed --newsfeed.service.urlRoot=http://localhost:5620 --newsfeed.service.pathTemplate=/api/_newsfeed-FTS-external-service-simulators/kibana/v{VERSION}.json --logging.appenders.deprecation.type=console --logging.appenders.deprecation.layout.type=json --logging.loggers[0].name=elasticsearch.deprecation --logging.loggers[0].level=all --logging.loggers[0].appenders[0]=deprecation --status.allowAnonymous=true --server.uuid=5b2de169-2785-441b-ae8c-186a1936b17d --xpack.maps.showMapsInspectorAdapter=true --xpack.maps.preserveDrawingBuffer=true --xpack.security.encryptionKey="wuGNaIhoMpk5sO4UBxgr3NyW1sFcLgIf" --xpack.encryptedSavedObjects.encryptionKey="DkdXazszSCYexXqz4YktBGHCRkV6hyNK" --xpack.discoverEnhanced.actions.exploreDataInContextMenu.enabled=true --savedObjects.maxImportPayloadBytes=10485760 --xpack.siem.enabled=true --map.proxyElasticMapsServiceInMaps=true --xpack.security.session.idleTimeout=3600000 --telemetry.optIn=true --xpack.fleet.enabled=true --xpack.fleet.agents.pollingRequestTimeout=5000 --xpack.data_enhanced.search.sessions.enabled=true --xpack.data_enhanced.search.sessions.notTouchedTimeout=15s --xpack.data_enhanced.search.sessions.trackingInterval=5s --xpack.data_enhanced.search.sessions.cleanupInterval=5s --xpack.ruleRegistry.write.enabled=true 
--xpack.reporting.capture.networkPolicy.rules=[{"allow":true,"protocol":"http:"},{"allow":false,"host":"via.placeholder.com"},{"allow":true,"protocol":"https:"},{"allow":true,"protocol":"data:"},{"allow":false}] --xpack.reporting.capture.maxAttempts=1 --xpack.reporting.csv.maxSizeBytes=6000 --xpack.reporting.roles.enabled=false
   │ proc [kibana] Kibana is currently running with legacy OpenSSL providers enabled! For details and instructions on how to disable see https://www.elastic.co/guide/en/kibana/7.17/production.html#openssl-legacy-provider
   │ proc [kibana]   log   [23:36:30.366] [info][plugins-service] Plugin "metricsEntities" is disabled.
   │ proc [kibana]   log   [23:36:30.449] [info][server][Preboot][http] http server running at http://localhost:5620
   │ proc [kibana]   log   [23:36:30.497] [warning][config][deprecation] Starting in 8.0, the Kibana logging format will be changing. This may affect you if you are doing any special handling of your Kibana logs, such as ingesting logs into Elasticsearch for further analysis. If you are using the new logging configuration, you are already receiving logs in both old and new formats, and the old format will simply be going away. If you are not yet using the new logging configuration, the log format will change upon upgrade to 8.0. Beginning in 8.0, the format of JSON logs will be ECS-compatible JSON, and the default pattern log format will be configurable with our new logging system. Please refer to the documentation for more information about the new logging format.
   │ proc [kibana]   log   [23:36:30.497] [warning][config][deprecation] Configuring "xpack.fleet.enabled" is deprecated and will be removed in 8.0.0.
   │ proc [kibana]   log   [23:36:30.498] [warning][config][deprecation] You no longer need to configure "xpack.fleet.agents.pollingRequestTimeout".
   │ proc [kibana]   log   [23:36:30.498] [warning][config][deprecation] map.proxyElasticMapsServiceInMaps is deprecated and is no longer used
   │ proc [kibana]   log   [23:36:30.499] [warning][config][deprecation] Setting "security.showInsecureClusterWarning" has been replaced by "xpack.security.showInsecureClusterWarning"
   │ proc [kibana]   log   [23:36:30.499] [warning][config][deprecation] Users are automatically required to log in again after 30 days starting in 8.0. Override this value to change the timeout.
   │ proc [kibana]   log   [23:36:30.500] [warning][config][deprecation] Setting "xpack.siem.enabled" has been replaced by "xpack.securitySolution.enabled"
   │ proc [kibana]   log   [23:36:30.638] [info][plugins-system][standard] Setting up [114] plugins: [newsfeedFixtures,translations,licensing,globalSearch,globalSearchProviders,features,licenseApiGuard,code,usageCollection,xpackLegacy,taskManager,telemetryCollectionManager,telemetryCollectionXpack,kibanaUsageCollection,share,embeddable,uiActionsEnhanced,screenshotMode,banners,telemetry,newsfeed,mapsEms,mapsLegacy,kibanaLegacy,fieldFormats,expressions,dataViews,charts,esUiShared,bfetch,data,savedObjects,presentationUtil,expressionShape,expressionRevealImage,expressionRepeatImage,expressionMetric,expressionImage,customIntegrations,home,searchprofiler,painlessLab,grokdebugger,management,watcher,licenseManagement,advancedSettings,spaces,security,savedObjectsTagging,reporting,canvas,lists,ingestPipelines,fileUpload,encryptedSavedObjects,dataEnhanced,cloud,snapshotRestore,eventLog,actions,alerting,triggersActionsUi,transform,stackAlerts,ruleRegistry,visualizations,visTypeXy,visTypeVislib,visTypeVega,visTypeTimelion,visTypeTagcloud,visTypeTable,visTypePie,visTypeMetric,visTypeMarkdown,tileMap,regionMap,expressionTagcloud,expressionMetricVis,console,graph,fleet,indexManagement,remoteClusters,crossClusterReplication,indexLifecycleManagement,dashboard,maps,dashboardMode,dashboardEnhanced,visualize,visTypeTimeseries,rollup,indexPatternFieldEditor,lens,cases,timelines,discover,osquery,observability,discoverEnhanced,dataVisualizer,ml,uptime,securitySolution,infra,upgradeAssistant,monitoring,logstash,enterpriseSearch,apm,savedObjectsManagement,indexPatternManagement]
   │ proc [kibana]   log   [23:36:30.656] [info][plugins][taskManager] TaskManager is identified by the Kibana UUID: 5b2de169-2785-441b-ae8c-186a1936b17d
   │ proc [kibana]   log   [23:36:30.772] [warning][config][plugins][security] Session cookies will be transmitted over insecure connections. This is not recommended.
   │ proc [kibana]   log   [23:36:30.792] [warning][config][plugins][security] Session cookies will be transmitted over insecure connections. This is not recommended.
   │ proc [kibana]   log   [23:36:30.808] [warning][config][plugins][reporting] Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
   │ proc [kibana]   log   [23:36:30.830] [info][encryptedSavedObjects][plugins] Hashed 'xpack.encryptedSavedObjects.encryptionKey' for this instance: nnkvE7kjGgidcjXzmLYBbIh4THhRWI1/7fUjAEaJWug=
   │ proc [kibana]   log   [23:36:30.867] [info][plugins][ruleRegistry] Installing common resources shared between all indices
   │ proc [kibana]   log   [23:36:31.328] [info][config][plugins][reporting] Chromium sandbox provides an additional layer of protection, and is supported for Linux Ubuntu 20.04 OS. Automatically enabling Chromium sandbox.
   │ proc [kibana]   log   [23:36:31.588] [info][savedobjects-service] Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations...
   │ proc [kibana]   log   [23:36:31.589] [info][savedobjects-service] Starting saved objects migrations
   │ proc [kibana]   log   [23:36:31.638] [info][savedobjects-service] [.kibana_task_manager] INIT -> CREATE_NEW_TARGET. took: 12ms.
   │ proc [kibana]   log   [23:36:31.644] [info][savedobjects-service] [.kibana] INIT -> CREATE_NEW_TARGET. took: 22ms.
   │ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.kibana_task_manager_7.17.23_001] creating index, cause [api], templates [], shards [1]/[1]
   │ info [o.e.c.r.a.AllocationService] [ftr] updating number_of_replicas to [0] for indices [.kibana_task_manager_7.17.23_001]
   │ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.kibana_7.17.23_001] creating index, cause [api], templates [], shards [1]/[1]
   │ info [o.e.c.r.a.AllocationService] [ftr] updating number_of_replicas to [0] for indices [.kibana_7.17.23_001]
   │ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana_task_manager_7.17.23_001][0], [.kibana_7.17.23_001][0]]]).
   │ proc [kibana]   log   [23:36:31.879] [info][savedobjects-service] [.kibana_task_manager] CREATE_NEW_TARGET -> MARK_VERSION_INDEX_READY. took: 241ms.
   │ proc [kibana]   log   [23:36:31.883] [info][savedobjects-service] [.kibana] CREATE_NEW_TARGET -> MARK_VERSION_INDEX_READY. took: 239ms.
   │ proc [kibana]   log   [23:36:32.004] [info][savedobjects-service] [.kibana_task_manager] MARK_VERSION_INDEX_READY -> DONE. took: 125ms.
   │ proc [kibana]   log   [23:36:32.005] [info][savedobjects-service] [.kibana_task_manager] Migration completed after 379ms
   │ proc [kibana]   log   [23:36:32.035] [info][savedobjects-service] [.kibana] MARK_VERSION_INDEX_READY -> DONE. took: 152ms.
   │ proc [kibana]   log   [23:36:32.036] [info][savedobjects-service] [.kibana] Migration completed after 414ms
   │ proc [kibana]   log   [23:36:32.043] [info][plugins-system][standard] Starting [114] plugins: [newsfeedFixtures,translations,licensing,globalSearch,globalSearchProviders,features,licenseApiGuard,code,usageCollection,xpackLegacy,taskManager,telemetryCollectionManager,telemetryCollectionXpack,kibanaUsageCollection,share,embeddable,uiActionsEnhanced,screenshotMode,banners,telemetry,newsfeed,mapsEms,mapsLegacy,kibanaLegacy,fieldFormats,expressions,dataViews,charts,esUiShared,bfetch,data,savedObjects,presentationUtil,expressionShape,expressionRevealImage,expressionRepeatImage,expressionMetric,expressionImage,customIntegrations,home,searchprofiler,painlessLab,grokdebugger,management,watcher,licenseManagement,advancedSettings,spaces,security,savedObjectsTagging,reporting,canvas,lists,ingestPipelines,fileUpload,encryptedSavedObjects,dataEnhanced,cloud,snapshotRestore,eventLog,actions,alerting,triggersActionsUi,transform,stackAlerts,ruleRegistry,visualizations,visTypeXy,visTypeVislib,visTypeVega,visTypeTimelion,visTypeTagcloud,visTypeTable,visTypePie,visTypeMetric,visTypeMarkdown,tileMap,regionMap,expressionTagcloud,expressionMetricVis,console,graph,fleet,indexManagement,remoteClusters,crossClusterReplication,indexLifecycleManagement,dashboard,maps,dashboardMode,dashboardEnhanced,visualize,visTypeTimeseries,rollup,indexPatternFieldEditor,lens,cases,timelines,discover,osquery,observability,discoverEnhanced,dataVisualizer,ml,uptime,securitySolution,infra,upgradeAssistant,monitoring,logstash,enterpriseSearch,apm,savedObjectsManagement,indexPatternManagement]
   │ proc [kibana]   log   [23:36:33.076] [info][monitoring][monitoring][plugins] config sourced from: production cluster
   │ proc [kibana]   log   [23:36:34.565] [info][server][Kibana][http] http server running at http://localhost:5620
   │ proc [kibana]   log   [23:36:34.651] [info][status] Kibana is now degraded
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-technical-mappings]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-ecs-mappings]
   │ proc [kibana]   log   [23:36:34.795] [info][kibana-monitoring][monitoring][monitoring][plugins] Starting monitoring stats collection
   │ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.apm-agent-configuration] creating index, cause [api], templates [], shards [1]/[1]
   │ info [o.e.c.r.a.AllocationService] [ftr] updating number_of_replicas to [0] for indices [.apm-agent-configuration]
   │ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.apm-custom-link] creating index, cause [api], templates [], shards [1]/[1]
   │ info [o.e.c.r.a.AllocationService] [ftr] updating number_of_replicas to [0] for indices [.apm-custom-link]
   │ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.apm-agent-configuration][0], [.apm-custom-link][0]]]).
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.kibana_security_session_index_template_1] for index patterns [.kibana_security_session_1]
   │ info [o.e.c.m.MetadataMappingService] [ftr] [.kibana_task_manager_7.17.23_001/tqO4FNPZQ0uKOMmw8nSuCQ] update_mapping [_doc]
   │ info [o.e.c.m.MetadataMappingService] [ftr] [.kibana_7.17.23_001/Ms4px46KT8CNs_5hOqbJ2w] update_mapping [_doc]
   │ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.kibana_security_session_1] creating index, cause [api], templates [.kibana_security_session_index_template_1], shards [1]/[0]
   │ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana_security_session_1][0]]]).
   │ info [o.e.c.m.MetadataMappingService] [ftr] [.kibana_7.17.23_001/Ms4px46KT8CNs_5hOqbJ2w] update_mapping [_doc]
   │ info [o.e.c.m.MetadataMappingService] [ftr] [.kibana_7.17.23_001/Ms4px46KT8CNs_5hOqbJ2w] update_mapping [_doc]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [.alerts-ilm-policy]
   │ proc [kibana]   log   [23:36:36.191] [info][plugins][ruleRegistry] Installed common resources shared between all indices
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [kibana-event-log-policy]
   │ proc [kibana]   log   [23:36:36.195] [info][plugins][ruleRegistry] Installing resources for index .alerts-observability.uptime.alerts
   │ proc [kibana]   log   [23:36:36.196] [info][plugins][ruleRegistry] Installing resources for index .alerts-observability.logs.alerts
   │ proc [kibana]   log   [23:36:36.197] [info][plugins][ruleRegistry] Installing resources for index .alerts-observability.metrics.alerts
   │ proc [kibana]   log   [23:36:36.198] [info][plugins][ruleRegistry] Installing resources for index .alerts-observability.apm.alerts
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-observability.uptime.alerts-mappings]
   │ proc [kibana]   log   [23:36:36.316] [info][plugins][ruleRegistry] Installed resources for index .alerts-observability.uptime.alerts
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-observability.logs.alerts-mappings]
   │ proc [kibana]   log   [23:36:36.387] [info][plugins][ruleRegistry] Installed resources for index .alerts-observability.logs.alerts
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-observability.metrics.alerts-mappings]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-observability.apm.alerts-mappings]
   │ proc [kibana]   log   [23:36:36.437] [info][plugins][ruleRegistry] Installed resources for index .alerts-observability.metrics.alerts
   │ proc [kibana]   log   [23:36:36.489] [info][plugins][ruleRegistry] Installed resources for index .alerts-observability.apm.alerts
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.kibana-event-log-7.17.23-snapshot-template] for index patterns [.kibana-event-log-7.17.23-snapshot-*]
   │ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.kibana-event-log-7.17.23-snapshot-000001] creating index, cause [api], templates [.kibana-event-log-7.17.23-snapshot-template], shards [1]/[1]
   │ info [o.e.c.r.a.AllocationService] [ftr] updating number_of_replicas to [0] for indices [.kibana-event-log-7.17.23-snapshot-000001]
   │ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana-event-log-7.17.23-snapshot-000001][0]]]).
   │ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.kibana-event-log-7.17.23-snapshot-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [kibana-event-log-policy]
   │ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.kibana-event-log-7.17.23-snapshot-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] in policy [kibana-event-log-policy]
   │ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.kibana-event-log-7.17.23-snapshot-000001] from [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [kibana-event-log-policy]
   │ proc [kibana]   log   [23:36:37.103] [info][plugins][securitySolution] Dependent plugin setup complete - Starting ManifestTask
   │ proc [kibana]   log   [23:36:37.342] [info][chromium][plugins][reporting] Browser executable: /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720479270603256785/elastic/kibana-pull-request/kibana-build-xpack/x-pack/plugins/reporting/chromium/headless_shell-linux_x64/headless_shell
   │ proc [kibana]   log   [23:36:37.377] [info][plugins][reporting][store] Creating ILM policy for managing reporting indices: kibana-reporting
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [kibana-reporting]
   │ proc [kibana]   log   [23:36:37.920] [info][0][1][endpoint:metadata-check-transforms-task:0][plugins][securitySolution] no endpoint metadata transforms found
   │ info [o.e.c.m.MetadataMappingService] [ftr] [.kibana_7.17.23_001/Ms4px46KT8CNs_5hOqbJ2w] update_mapping [_doc]
   │ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.ds-ilm-history-5-2024.07.08-000001] creating index, cause [initialize_data_stream], templates [ilm-history], shards [1]/[0]
   │ info [o.e.c.m.MetadataCreateDataStreamService] [ftr] adding data stream [ilm-history-5] with write index [.ds-ilm-history-5-2024.07.08-000001], backing indices [], and aliases []
   │ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.ds-ilm-history-5-2024.07.08-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [ilm-history-ilm-policy]
   │ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.ds-ilm-history-5-2024.07.08-000001][0]]]).
   │ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.ds-ilm-history-5-2024.07.08-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] in policy [ilm-history-ilm-policy]
   │ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.ds-ilm-history-5-2024.07.08-000001] from [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [ilm-history-ilm-policy]
   │ proc [kibana]   log   [23:36:42.245] [info][status] Kibana is now available (was degraded)
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup2' ]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] updated role [system_indices_superuser]
   │ info [o.e.x.s.a.u.TransportPutUserAction] [ftr] updated user [system_indices_superuser]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [test_logstash_reader]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_canvas_all]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_discover_all]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_dashboard_read]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_discover_read]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_visualize_read]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_visualize_all]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_dashboard_all]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_maps_all]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_maps_read]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [geoshape_data_reader]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [antimeridian_points_reader]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [antimeridian_shapes_reader]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [meta_for_geoshape_data_reader]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [geoconnections_data_reader]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [test_logs_data_reader]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [geoall_data_writer]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_index_pattern_management_all]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_devtools_read]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_ccr_role]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_upgrade_assistant_role]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [manage_rollups_role]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [test_rollup_reader]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [test_api_keys]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [manage_security]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [ccr_user]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [manage_ilm]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [index_management_user]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [snapshot_restore_user]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [ingest_pipelines_user]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [license_management_user]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [logstash_read_user]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [remote_clusters_user]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_alerts_logs_all_else_read]
   │ info [o.e.x.s.a.u.TransportPutUserAction] [ftr] added user [test_user]
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup2' ]
   │ info Starting tests
   │ warn debug logs are being captured, only error logs will be written to the console
   │
     └-: Reporting APIs
       └-> "before all" hook: beforeTestSuite.trigger in "Reporting APIs"
       └-> "before all" hook in "Reporting APIs"
       └-: BWC report generation urls
         └-> "before all" hook: beforeTestSuite.trigger in "BWC report generation urls"
         └-> "before all" hook in "BWC report generation urls"
         └-: Pre 6_2
           └-> job posted successfully
         └-: 6_2
           └-> "before all" hook: beforeTestSuite.trigger for "multiple jobs posted"
           └-> multiple jobs posted
             └-> "before each" hook: global before each for "multiple jobs posted"
             │ proc [kibana] {"ecs":{"version":"1.12.0"},"@timestamp":"2024-07-08T23:37:09.055+00:00","message":"Elasticsearch deprecation: 299 Elasticsearch-7.17.23-SNAPSHOT-42b93a534929add031e668becc4565463f2c4b32 \"[script][1:509] Deprecated field [inline] used, expected [source] instead\"\nOrigin:kibana\nQuery:\n200 - 561.0B\nPOST /animals-*/_async_search?batched_reduce_size=64&wait_for_completion_timeout=100ms&keep_on_completion=true&keep_alive=604800000ms&ignore_unavailable=true&track_total_hits=true&enable_fields_emulation=true&preference=1720481826428\n{\"aggs\":{\"2\":{\"terms\":{\"field\":\"weightLbs\",\"order\":{\"_count\":\"desc\"},\"size\":5}}},\"size\":0,\"script_fields\":{\"isDog\":{\"script\":{\"source\":\"return doc['animal.keyword'].value == 'dog'\",\"lang\":\"painless\"}}},\"stored_fields\":[\"*\"],\"runtime_mappings\":{},\"query\":{\"bool\":{\"must\":[{\"query_string\":{\"query\":\"weightLbs:>15\",\"analyze_wildcard\":true,\"time_zone\":\"America/New_York\"}},{\"query_string\":{\"query\":\"weightLbs:>10\",\"analyze_wildcard\":true,\"time_zone\":\"America/New_York\"}}],\"filter\":[{\"script\":{\"script\":{\"inline\":\"boolean compare(Supplier s, def v) {return s.get() == v;}compare(() -> { return doc['animal.keyword'].value == 'dog' }, params.value);\",\"lang\":\"painless\",\"params\":{\"value\":true}}}},{\"range\":{\"@timestamp\":{\"format\":\"strict_date_optional_time\",\"gte\":\"2018-04-09T21:56:08.000Z\",\"lte\":\"2018-04-11T21:56:08.000Z\"}}}],\"should\":[],\"must_not\":[]}}}","log":{"level":"DEBUG","logger":"elasticsearch.deprecation"},"process":{"pid":9180}}
             │ proc [kibana] {"ecs":{"version":"1.12.0"},"@timestamp":"2024-07-08T23:37:09.070+00:00","message":"Elasticsearch deprecation: 299 Elasticsearch-7.17.23-SNAPSHOT-42b93a534929add031e668becc4565463f2c4b32 \"[script][1:496] Deprecated field [inline] used, expected [source] instead\"\nOrigin:kibana\nQuery:\n200 - 969.0B\nPOST /animals-*/_async_search?batched_reduce_size=64&wait_for_completion_timeout=100ms&keep_on_completion=true&keep_alive=604800000ms&ignore_unavailable=true&track_total_hits=true&enable_fields_emulation=true&preference=1720481826428\n{\"aggs\":{\"2\":{\"terms\":{\"field\":\"weightLbs\",\"order\":{\"_count\":\"desc\"},\"size\":5},\"aggs\":{\"3\":{\"terms\":{\"field\":\"animal.keyword\",\"order\":{\"_count\":\"desc\"},\"size\":5}}}}},\"size\":0,\"script_fields\":{\"isDog\":{\"script\":{\"source\":\"return doc['animal.keyword'].value == 'dog'\",\"lang\":\"painless\"}}},\"stored_fields\":[\"*\"],\"runtime_mappings\":{},\"query\":{\"bool\":{\"must\":[{\"query_string\":{\"query\":\"weightLbs:>15\",\"analyze_wildcard\":true,\"time_zone\":\"America/New_York\"}}],\"filter\":[{\"script\":{\"script\":{\"inline\":\"boolean compare(Supplier s, def v) {return s.get() == v;}compare(() -> { return doc['animal.keyword'].value == 'dog' }, params.value);\",\"lang\":\"painless\",\"params\":{\"value\":true}}}},{\"range\":{\"@timestamp\":{\"format\":\"strict_date_optional_time\",\"gte\":\"2018-04-09T21:56:08.000Z\",\"lte\":\"2018-04-11T21:56:08.000Z\"}}}],\"should\":[],\"must_not\":[]}}}","log":{"level":"DEBUG","logger":"elasticsearch.deprecation"},"process":{"pid":9180}}
             └- ✓ pass  (18.4s)
           └-> "after all" hook: afterTestSuite.trigger for "multiple jobs posted"
         └-> "after all" hook: afterTestSuite.trigger in "BWC report generation urls"
       └-: BWC report generation into existing indexes
         └-> "before all" hook: beforeTestSuite.trigger in "BWC report generation into existing indexes"
         └-: existing 6_2 index
           └-> "before all" hook: beforeTestSuite.trigger for "single job posted can complete in an index created with an older version"
           └-> "before all" hook: load data and add index alias for "single job posted can complete in an index created with an older version"
           └-> single job posted can complete in an index created with an older version
             └-> "before each" hook: global before each for "single job posted can complete in an index created with an older version"
             └- ✓ pass  (3.1s)
           └-> "after all" hook: remove index alias for "single job posted can complete in an index created with an older version"
           └-> "after all" hook: afterTestSuite.trigger for "single job posted can complete in an index created with an older version"
         └-> "after all" hook: afterTestSuite.trigger in "BWC report generation into existing indexes"
       └-: Security Roles and Privileges for Applications
         └-> "before all" hook: beforeTestSuite.trigger in "Security Roles and Privileges for Applications"
         └-> "before all" hook in "Security Roles and Privileges for Applications"
         └-: Dashboard: CSV download file
           └-> "before all" hook: beforeTestSuite.trigger for "does not allow user that does not have the role-based privilege"
           └-> does not allow user that does not have the role-based privilege
             └-> "before each" hook: global before each for "does not allow user that does not have the role-based privilege"
             └- ✓ pass  (1.0s)
           └-> does allow user with the role privilege
             └-> "before each" hook: global before each for "does allow user with the role privilege"
             └- ✓ pass  (366ms)
           └-> "after all" hook: afterTestSuite.trigger for "does allow user with the role privilege"
         └-: Dashboard: Generate PDF report
           └-> "before all" hook: beforeTestSuite.trigger for "does not allow user that does not have the role-based privilege"
           └-> does not allow user that does not have the role-based privilege
             └-> "before each" hook: global before each for "does not allow user that does not have the role-based privilege"
             └- ✓ pass  (29ms)
           └-> does allow user with the role-based privilege
             └-> "before each" hook: global before each for "does allow user with the role-based privilege"
             └- ✓ pass  (44ms)
           └-> "after all" hook: afterTestSuite.trigger for "does allow user with the role-based privilege"
         └-: Visualize: Generate PDF report
           └-> "before all" hook: beforeTestSuite.trigger for "does not allow user that does not have the role-based privilege"
           └-> does not allow user that does not have the role-based privilege
             └-> "before each" hook: global before each for "does not allow user that does not have the role-based privilege"
             └- ✓ pass  (26ms)
           └-> does allow user with the role-based privilege
             └-> "before each" hook: global before each for "does allow user with the role-based privilege"
             └- ✓ pass  (43ms)
           └-> "after all" hook: afterTestSuite.trigger for "does allow user with the role-based privilege"
         └-: Canvas: Generate PDF report
           └-> "before all" hook: beforeTestSuite.trigger for "does not allow user that does not have the role-based privilege"
           └-> does not allow user that does not have the role-based privilege
             └-> "before each" hook: global before each for "does not allow user that does not have the role-based privilege"
             └- ✓ pass  (28ms)
           └-> does allow user with the role-based privilege
             └-> "before each" hook: global before each for "does allow user with the role-based privilege"
             └- ✓ pass  (50ms)
           └-> "after all" hook: afterTestSuite.trigger for "does allow user with the role-based privilege"
         └-: Discover: Generate CSV report
           └-> "before all" hook: beforeTestSuite.trigger for "does not allow user that does not have the role-based privilege"
           └-> does not allow user that does not have the role-based privilege
             └-> "before each" hook: global before each for "does not allow user that does not have the role-based privilege"
             └- ✓ pass  (33ms)
           └-> does allow user with the role-based privilege
             └-> "before each" hook: global before each for "does allow user with the role-based privilege"
             └- ✓ pass  (47ms)
           └-> "after all" hook: afterTestSuite.trigger for "does allow user with the role-based privilege"
         └-> "after all" hook in "Security Roles and Privileges for Applications"
         └-> "after all" hook: afterTestSuite.trigger in "Security Roles and Privileges for Applications"
       └-: CSV Generation from SearchSource
         └-> "before all" hook: beforeTestSuite.trigger in "CSV Generation from SearchSource"
         └-> "before all" hook in "CSV Generation from SearchSource"
         └-: Exports CSV with almost all fields when using fieldsFromSource
           └-> "before all" hook: beforeTestSuite.trigger for "(ES 7)"
           └-> "before all" hook for "(ES 7)"
           └-> (ES 7)
             └-> "before each" hook: global before each for "(ES 7)"
             └- ✓ pass  (3ms)
           └-> (ES 8)
           └-> "after all" hook: afterTestSuite.trigger for "(ES 8)"
         └-: Exports CSV with all fields when using defaults
           └-> "before all" hook: beforeTestSuite.trigger for "(ES 7)"
           └-> "before all" hook for "(ES 7)"
           └-> (ES 7)
             └-> "before each" hook: global before each for "(ES 7)"
             └- ✓ pass  (0ms)
           └-> (ES 8)
           └-> "after all" hook: afterTestSuite.trigger for "(ES 8)"
         └-: date formatting
           └-> "before all" hook: beforeTestSuite.trigger for "With filters and timebased data, default to UTC"
           └-> "before all" hook for "With filters and timebased data, default to UTC"
           └-> With filters and timebased data, default to UTC
             └-> "before each" hook: global before each for "With filters and timebased data, default to UTC"
             └- ✓ pass  (129ms)
           └-> With filters and timebased data, non-default timezone
             └-> "before each" hook: global before each for "With filters and timebased data, non-default timezone"
             └- ✓ pass  (127ms)
           └-> Formatted date_nanos data, UTC timezone
             └-> "before each" hook: global before each for "Formatted date_nanos data, UTC timezone"
             └- ✓ pass  (2.9s)
           └-> Formatted date_nanos data, custom timezone (New York)
             └-> "before each" hook: global before each for "Formatted date_nanos data, custom timezone (New York)"
             └- ✓ pass  (2.8s)
           └-> "after all" hook for "Formatted date_nanos data, custom timezone (New York)"
           └-> "after all" hook: afterTestSuite.trigger for "Formatted date_nanos data, custom timezone (New York)"
         └-: non-timebased
           └-> "before all" hook: beforeTestSuite.trigger for "Handle _id and _index columns"
           └-> Handle _id and _index columns
             └-> "before each" hook: global before each for "Handle _id and _index columns"
             └- ✓ pass  (2.8s)
           └-> With filters and non-timebased data
             └-> "before each" hook: global before each for "With filters and non-timebased data"
             └- ✓ pass  (3.1s)
           └-> "after all" hook: afterTestSuite.trigger for "With filters and non-timebased data"
         └-: validation
           └-> "before all" hook: beforeTestSuite.trigger for "Return a 404"
           └-> Return a 404
             └-> "before each" hook: global before each for "Return a 404"
             └- ✓ pass  (933ms)
           └-: Searches large amount of data, stops at Max Size Reached
             └-> "before all" hook: beforeTestSuite.trigger for "(ES 7)"
             └-> "before all" hook for "(ES 7)"
             └-> (ES 7)
               └-> "before each" hook: global before each for "(ES 7)"
               └- ✓ pass  (1ms)
             └-> (ES 8)
             └-> "after all" hook: afterTestSuite.trigger for "(ES 8)"
           └-> "after all" hook: afterTestSuite.trigger for "Return a 404"
         └-: _id field is a big integer, passes through the value without mutation
           └-> "before all" hook: beforeTestSuite.trigger for "(ES 7)"
           └-> "before all" hook for "(ES 7)"
           └-> (ES 7)
             └-> "before each" hook: global before each for "(ES 7)"
             └- ✓ pass  (0ms)
           └-> (ES 8)
           └-> "after all" hook for "(ES 8)"
           └-> "after all" hook: afterTestSuite.trigger for "(ES 8)"
         └-> "after all" hook in "CSV Generation from SearchSource"
         └-> "after all" hook: afterTestSuite.trigger in "CSV Generation from SearchSource"
       └-: Generate CSV from SearchSource
         └-> "before all" hook: beforeTestSuite.trigger for "exported CSV file matches snapshot (7.17)"
         └-> "before all" hook for "exported CSV file matches snapshot (7.17)"
         └-> exported CSV file matches snapshot (7.17)
           └-> "before each" hook: global before each for "exported CSV file matches snapshot (7.17)"
           └- ✓ pass  (1ms)
         └-> exported CSV file matches snapshot (8.0)
         └-> "after all" hook for "exported CSV file matches snapshot (8.0)"
         └-> "after all" hook: afterTestSuite.trigger for "exported CSV file matches snapshot (8.0)"
       └-: Generation from Legacy Job Params
         └-> "before all" hook: beforeTestSuite.trigger for "Rejects bogus jobParams"
         └-> "before all" hook for "Rejects bogus jobParams"
         └-> Rejects bogus jobParams
           └-> "before each" hook: global before each for "Rejects bogus jobParams"
           └- ✓ pass  (33ms)
         └-> Rejects empty jobParams
           └-> "before each" hook: global before each for "Rejects empty jobParams"
           └- ✓ pass  (26ms)
         └-> Accepts jobParams in POST payload
           └-> "before each" hook: global before each for "Accepts jobParams in POST payload"
           └- ✓ pass  (66ms)
         └-> Accepts jobParams in query string
           └-> "before each" hook: global before each for "Accepts jobParams in query string"
           └- ✓ pass  (38ms)
         └-> "after all" hook for "Accepts jobParams in query string"
         └-> "after all" hook: afterTestSuite.trigger for "Accepts jobParams in query string"
       └-: CSV Generation from Saved Search ID
         └-> "before all" hook: beforeTestSuite.trigger in "CSV Generation from Saved Search ID"
         └-> "before all" hook in "CSV Generation from Saved Search ID"
         └-: export from timebased data view
           └-> "before all" hook: beforeTestSuite.trigger in "export from timebased data view"
           └-> "before all" hook in "export from timebased data view"
           └-: export with no saved filters and no job post params
             └-> "before all" hook: beforeTestSuite.trigger for "job response data is correct"
             └-> "before all" hook for "job response data is correct"
             └-> job response data is correct
               └-> "before each" hook: global before each for "job response data is correct"
               └- ✓ pass  (4ms)
             └-> csv file matches
               └-> "before each" hook: global before each for "csv file matches"
               └- ✓ pass  (2ms)
             └-> "after all" hook: afterTestSuite.trigger for "csv file matches"
           └-: export with saved date filter and no job post params
             └-> "before all" hook: beforeTestSuite.trigger for "job response data is correct"
             └-> "before all" hook for "job response data is correct"
             └-> job response data is correct
               └-> "before each" hook: global before each for "job response data is correct"
               └- ✓ pass  (2ms)
             └-> csv file matches
               └-> "before each" hook: global before each for "csv file matches"
               └- ✓ pass  (0ms)
             └-> "after all" hook: afterTestSuite.trigger for "csv file matches"
           └-: export with no selected columns and saved date filter and no job post params
             └-> "before all" hook: beforeTestSuite.trigger for "job response data is correct"
             └-> "before all" hook for "job response data is correct"
             └-> job response data is correct
               └-> "before each" hook: global before each for "job response data is correct"
               └- ✓ pass  (4ms)
             └-> csv file matches (7.17)
               └-> "before each" hook: global before each for "csv file matches (7.17)"
               └- ✓ pass  (1ms)
             └-> csv file matches (8)
             └-> "after all" hook: afterTestSuite.trigger for "csv file matches (8)"
           └-: export with saved date and terms filters and no job post params
             └-> "before all" hook: beforeTestSuite.trigger for "job response data is correct"
             └-> "before all" hook for "job response data is correct"
             └-> job response data is correct
               └-> "before each" hook: global before each for "job response data is correct"
               └- ✓ pass  (3ms)
             └-> csv file matches
               └-> "before each" hook: global before each for "csv file matches"
               └- ✓ pass  (0ms)
             └-> "after all" hook: afterTestSuite.trigger for "csv file matches"
           └-: export with saved filters and job post params
             └-> "before all" hook: beforeTestSuite.trigger for "job response data is correct"
             └-> "before all" hook for "job response data is correct"
             └-> job response data is correct
               └-> "before each" hook: global before each for "job response data is correct"
               └- ✓ pass  (3ms)
             └-> csv file matches
               └-> "before each" hook: global before each for "csv file matches"
               └- ✓ pass  (0ms)
             └-> "after all" hook: afterTestSuite.trigger for "csv file matches"
           └-: export with saved filters, job params timerange filter, and query from unsaved state
             └-> "before all" hook: beforeTestSuite.trigger for "job response data is correct"
             └-> "before all" hook for "job response data is correct"
             └-> job response data is correct
               └-> "before each" hook: global before each for "job response data is correct"
               └- ✓ pass  (3ms)
             └-> csv file matches
               └-> "before each" hook: global before each for "csv file matches"
               └- ✓ pass  (1ms)
             └-> "after all" hook: afterTestSuite.trigger for "csv file matches"
           └-: export with no saved filters and job post params
             └-> "before all" hook: beforeTestSuite.trigger for "job response data is correct"
             └-> "before all" hook for "job response data is correct"
             └-> job response data is correct
               └-> "before each" hook: global before each for "job response data is correct"
               └- ✓ pass  (3ms)
             └-> csv file matches
               └-> "before each" hook: global before each for "csv file matches"
               └- ✓ pass  (0ms)
             └-> "after all" hook: afterTestSuite.trigger for "csv file matches"
           └-: export with no saved filters and job post params with custom time zone
             └-> "before all" hook: beforeTestSuite.trigger for "job response data is correct"
             └-> "before all" hook for "job response data is correct"
             └-> job response data is correct
               └-> "before each" hook: global before each for "job response data is correct"
               └- ✓ pass  (4ms)
             └-> csv file matches
               └-> "before each" hook: global before each for "csv file matches"
               └- ✓ pass  (0ms)
             └-> "after all" hook for "csv file matches"
             └-> "after all" hook: afterTestSuite.trigger for "csv file matches"
           └-: export with "doc_table:hideTimeColumn" = "On"
             └-> "before all" hook: beforeTestSuite.trigger for "job response data is correct"
             └-> "before all" hook for "job response data is correct"
             └-> job response data is correct
               └-> "before each" hook: global before each for "job response data is correct"
               └- ✓ pass  (3ms)
             └-> csv file matches
               └-> "before each" hook: global before each for "csv file matches"
               └- ✓ pass  (0ms)
             └-> "after all" hook for "csv file matches"
             └-> "after all" hook: afterTestSuite.trigger for "csv file matches"
           └-: validation
             └-> "before all" hook: beforeTestSuite.trigger for "with saved search 404"
             └-> with saved search 404
               └-> "before each" hook: global before each for "with saved search 404"
               └- ✓ pass  (30ms)
             └-> with invalid min time range
               └-> "before each" hook: global before each for "with invalid min time range"
               └- ✓ pass  (30ms)
             └-> with invalid max time range
               └-> "before each" hook: global before each for "with invalid max time range"
               └- ✓ pass  (25ms)
             └-> "after all" hook: afterTestSuite.trigger for "with invalid max time range"
           └-> "after all" hook in "export from timebased data view"
           └-> "after all" hook: afterTestSuite.trigger in "export from timebased data view"
         └-: export from non-timebased data view
           └-> "before all" hook: beforeTestSuite.trigger in "export from non-timebased data view"
           └-> "before all" hook in "export from non-timebased data view"
           └-: with plain saved search
             └-> "before all" hook: beforeTestSuite.trigger for "job response data is correct"
             └-> "before all" hook for "job response data is correct"
             └-> job response data is correct
               └-> "before each" hook: global before each for "job response data is correct"
               └- ✓ pass  (4ms)
             └-> csv file matches (7.17)
               └-> "before each" hook: global before each for "csv file matches (7.17)"
               └- ✓ pass  (0ms)
             └-> csv file matches (8)
             └-> "after all" hook: afterTestSuite.trigger for "csv file matches (8)"
           └-> "after all" hook in "export from non-timebased data view"
           └-> "after all" hook: afterTestSuite.trigger in "export from non-timebased data view"
         └-> "after all" hook in "CSV Generation from Saved Search ID"
         └-> "after all" hook: afterTestSuite.trigger in "CSV Generation from Saved Search ID"
       └-: Network Policy
          └-> "before all" hook: beforeTestSuite.trigger for "should fail job when page violates the network policy"
          └-> "before all" hook for "should fail job when page violates the network policy"
          └-> should fail job when page violates the network policy
            └-> "before each" hook: global before each for "should fail job when page violates the network policy"
            └- ✓ pass  (5.0s)
          └-> "after all" hook for "should fail job when page violates the network policy"
          └-> "after all" hook: afterTestSuite.trigger for "should fail job when page violates the network policy"
       └-: Exports and Spaces
         └-> "before all" hook: beforeTestSuite.trigger for "should complete a job of PNG export of a dashboard in non-default space"
         └-> "before all" hook for "should complete a job of PNG export of a dashboard in non-default space"
         └-> should complete a job of PNG export of a dashboard in non-default space
           └-> "before each" hook: global before each for "should complete a job of PNG export of a dashboard in non-default space"
           └- ✓ pass  (10.1s)
         └-> should complete a job of PDF export of a dashboard in non-default space
           └-> "before each" hook: global before each for "should complete a job of PDF export of a dashboard in non-default space"
           └- ✓ pass  (8.1s)
         └-: CSV saved search export
           └-> "before all" hook: beforeTestSuite.trigger for "should use formats from the default space"
           └-> should use formats from the default space
             └-> "before each" hook: global before each for "should use formats from the default space"
             └- ✓ pass  (2.1s)
           └-> should use formats from non-default spaces
             └-> "before each" hook: global before each for "should use formats from non-default spaces"
             └- ✓ pass  (4.7s)
           └-> should use browserTimezone in jobParams for date formatting
             └-> "before each" hook: global before each for "should use browserTimezone in jobParams for date formatting"
             └- ✓ pass  (5.0s)
           └-> should default to UTC for date formatting when timezone is not known
             └-> "before each" hook: global before each for "should default to UTC for date formatting when timezone is not known"
             └- ✓ pass  (4.1s)
           └-> "after all" hook: afterTestSuite.trigger for "should default to UTC for date formatting when timezone is not known"
         └-> "after all" hook for "should complete a job of PDF export of a dashboard in non-default space"
         └-> "after all" hook: afterTestSuite.trigger for "should complete a job of PDF export of a dashboard in non-default space"
       └-: Usage
         └-> "before all" hook: beforeTestSuite.trigger in "Usage"
         └-> "before all" hook in "Usage"
         └-: initial state
           └-> "before all" hook: beforeTestSuite.trigger for "shows reporting as available and enabled"
           └-> "before all" hook for "shows reporting as available and enabled"
           └-> shows reporting as available and enabled
             └-> "before each" hook: global before each for "shows reporting as available and enabled"
             └- ✓ pass  (1ms)
           └-> "after each" hook for "shows reporting as available and enabled"
           └-> all counts are 0
             └-> "before each" hook: global before each for "all counts are 0"
             └- ✓ pass  (4ms)
           └-> "after each" hook for "all counts are 0"
           └-> "after all" hook: afterTestSuite.trigger for "all counts are 0"
         └-: from archive data
           └-> "before all" hook: beforeTestSuite.trigger for "generated from 6.2"
           └-> generated from 6.2
             └-> "before each" hook: global before each for "generated from 6.2"
             └- ✓ pass  (1.1s)
           └-> "after each" hook for "generated from 6.2"
           └-> generated from 6.3
             └-> "before each" hook: global before each for "generated from 6.3"
             └- ✓ pass  (2.4s)
           └-> "after each" hook for "generated from 6.3"
           └-> "after all" hook: afterTestSuite.trigger for "generated from 6.3"
         └-: from new jobs posted
           └-> "before all" hook: beforeTestSuite.trigger for "should handle csv"
           └-> should handle csv
             └-> "before each" hook: global before each for "should handle csv"
             └- ✓ pass  (3.5s)
           └-> "after each" hook for "should handle csv"
           └-> should handle preserve_layout pdf
             └-> "before each" hook: global before each for "should handle preserve_layout pdf"
             └- ✓ pass  (14.1s)
           └-> "after each" hook for "should handle preserve_layout pdf"
           └-> should handle print_layout pdf
             └-> "before each" hook: global before each for "should handle print_layout pdf"
             └- ✓ pass  (16.1s)
           └-> "after each" hook for "should handle print_layout pdf"
           └-> "after all" hook: afterTestSuite.trigger for "should handle print_layout pdf"
         └-> "after all" hook in "Usage"
         └-> "after all" hook: afterTestSuite.trigger in "Usage"
       └-: ILM policy migration APIs
         └-> "before all" hook: beforeTestSuite.trigger for "detects when no migration is needed"
         └-> "before all" hook for "detects when no migration is needed"
         └-> detects when no migration is needed
           └-> "before each" hook: global before each for "detects when no migration is needed"
           └- ✓ pass  (67ms)
         └-> "after each" hook for "detects when no migration is needed"
         └-> detects when reporting indices should be migrated due to missing ILM policy
           └-> "before each" hook: global before each for "detects when reporting indices should be migrated due to missing ILM policy"
           └- ✓ pass  (173ms)
         └-> "after each" hook for "detects when reporting indices should be migrated due to missing ILM policy"
         └-> detects when reporting indices should be migrated due to unmanaged indices
           └-> "before each" hook: global before each for "detects when reporting indices should be migrated due to unmanaged indices"
           └- ✓ pass  (122ms)
         └-> "after each" hook for "detects when reporting indices should be migrated due to unmanaged indices"
         └-> does not override an existing ILM policy
           └-> "before each" hook: global before each for "does not override an existing ILM policy"
           └- ✓ pass  (57ms)
         └-> "after each" hook for "does not override an existing ILM policy"
         └-> is not available to unauthorized users
           └-> "before each" hook: global before each for "is not available to unauthorized users"
           └- ✓ pass  (228ms)
         └-> "after each" hook for "is not available to unauthorized users"
         └-> "after all" hook for "is not available to unauthorized users"
         └-> "after all" hook: afterTestSuite.trigger for "is not available to unauthorized users"
       └-: Frozen indices search
         └-> "before all" hook: beforeTestSuite.trigger for "Search includes frozen indices based on Advanced Setting"
         └-> "before all" hook: reset for "Search includes frozen indices based on Advanced Setting"
         └-> Search includes frozen indices based on Advanced Setting
           └-> "before each" hook: global before each for "Search includes frozen indices based on Advanced Setting"
           │ proc [kibana] {"ecs":{"version":"1.12.0"},"@timestamp":"2024-07-08T23:41:17.761+00:00","message":"Elasticsearch deprecation: 299 Elasticsearch-7.17.23-SNAPSHOT-42b93a534929add031e668becc4565463f2c4b32 \"[ignore_throttled] parameter is deprecated because frozen indices have been deprecated. Consider cold or frozen tiers in place of frozen indices.\", 299 Elasticsearch-7.17.23-SNAPSHOT-42b93a534929add031e668becc4565463f2c4b32 \"Searching frozen indices [test3] is deprecated. Consider cold or frozen tiers in place of frozen indices. The frozen feature will be removed in a feature release.\"\nOrigin:kibana\nQuery:\n200 - 1.1KB\nPOST /test*/_search?ignore_unavailable=true&track_total_hits=true&enable_fields_emulation=true&timeout=30000ms&scroll=30s&size=500&ignore_throttled=false\n{\"fields\":[{\"field\":\"*\",\"include_unmapped\":\"true\"},{\"field\":\"@timestamp\",\"format\":\"strict_date_optional_time\"}],\"sort\":[{\"@timestamp\":{\"order\":\"desc\",\"unmapped_type\":\"boolean\"}}],\"track_total_hits\":true,\"script_fields\":{},\"stored_fields\":[\"*\"],\"runtime_mappings\":{},\"_source\":false,\"query\":{\"bool\":{\"must\":[],\"filter\":[{\"range\":{\"@timestamp\":{\"format\":\"strict_date_optional_time\",\"gte\":\"2020-08-24T00:00:00.000Z\",\"lte\":\"2022-08-24T21:40:48.346Z\"}}}],\"should\":[],\"must_not\":[]}}}","log":{"level":"DEBUG","logger":"elasticsearch.deprecation"},"process":{"pid":9180}}
           └- ✓ pass  (3.1s)
         └-> "after all" hook: reset for "Search includes frozen indices based on Advanced Setting"
         └-> "after all" hook: afterTestSuite.trigger for "Search includes frozen indices based on Advanced Setting"
       └-> "after all" hook: afterTestSuite.trigger in "Reporting APIs"
   │
   │ 71 passing (4.0m)
   │ 8 pending
   │
   │ proc [kibana]   log   [23:41:20.244] [info][plugins-system][standard] Stopping all plugins.
   │ proc [kibana]   log   [23:41:20.245] [info][kibana-monitoring][monitoring][monitoring][plugins] Monitoring stats collection is stopped
   │ info [kibana] exited with null after 302.7 seconds
   │ info [es] stopping node ftr
   │ info [o.e.x.m.p.NativeController] [ftr] Native controller process has stopped - no new native processes can be started
   │ info [o.e.n.Node] [ftr] stopping ...
   │ info [o.e.x.w.WatcherService] [ftr] stopping watch service, reason [shutdown initiated]
   │ info [o.e.x.w.WatcherLifeCycleService] [ftr] watcher has stopped and shutdown
   │ info [o.e.n.Node] [ftr] stopped
   │ info [o.e.n.Node] [ftr] closing ...
   │ info [o.e.n.Node] [ftr] closed
   │ info [es] stopped
   │ info [es] no debug files found, assuming es did not write any
   │ info [es] cleanup complete