
Migrationsv2: limit batch sizes to migrations.maxBatchSizeBytes (= 100mb by default) #109540

Merged
merged 14 commits into from
Sep 1, 2021

Conversation

rudolf
Contributor

@rudolf rudolf commented Aug 20, 2021

Summary

Fixes #107641

Migrationsv2

  • limit batch sizes to migrations.maxBatchSizeBytes (= 100mb by default)
  • redact the documents kept in the executionLog by only storing the _id to reduce memory consumption and avoid logging document contents when there's an exception
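
The batching approach can be sketched as follows. This is a simplified illustration under stated assumptions, not the actual Kibana implementation: `RawDoc` is a hypothetical stand-in for `SavedObjectsRawDoc`, and the one-byte overhead per document accounts for the newline terminating each NDJSON line.

```typescript
interface RawDoc {
  _id: string;
  _source: Record<string, unknown>;
}

// Each document in an NDJSON request body is terminated by "\n".
const NDJSON_NEW_LINE_BYTES = 1;

// Split documents into batches whose serialized size stays below the limit.
// A single document larger than the limit can never fit in any batch, so we
// fail fast with a descriptive error instead of sending a doomed request.
function createBatches(docs: RawDoc[], maxBatchSizeBytes: number): RawDoc[][] {
  const batches: RawDoc[][] = [[]];
  let currentBatchSize = 0;
  for (const doc of docs) {
    const docSize =
      Buffer.byteLength(JSON.stringify(doc), 'utf8') + NDJSON_NEW_LINE_BYTES;
    if (docSize > maxBatchSizeBytes) {
      throw new Error(
        `Document ${doc._id} exceeds maxBatchSizeBytes (${maxBatchSizeBytes})`
      );
    }
    if (currentBatchSize + docSize > maxBatchSizeBytes) {
      batches.push([]); // current batch is full; start a new one
      currentBatchSize = 0;
    }
    batches[batches.length - 1].push(doc);
    currentBatchSize += docSize;
  }
  return batches;
}
```

Measuring the serialized size (rather than counting documents) is what keeps each request under Elasticsearch's `http.max_content_length` regardless of how large individual saved objects are.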

Release notes

Fixes a bug that would cause Kibana upgrade migrations to fail after receiving a 413 Request Entity Too Large response from Elasticsearch if the .kibana* indices contained many large documents.

QA Testing

(These scenarios are already tested in integration tests)

  1. Test that migrations succeed when the ES max_content_length is less than a single batch
    node scripts/es.js snapshot --data-archive=src/core/server/saved_objects/migrationsv2/integration_tests/archives/7.14.0_xpack_sample_saved_objects.zip -E http.max_content_length=1715274
    node scripts/kibana.js --dev --migrations.maxBatchSizeBytes=1715274
    
  2. Migrations will fail when a saved object is larger than `maxBatchSizeBytes`
    node scripts/es.js snapshot --data-archive=src/core/server/saved_objects/migrationsv2/integration_tests/archives/7.14.0_xpack_sample_saved_objects.zip -E http.max_content_length=1715274
    node scripts/kibana.js --dev --migrations.maxBatchSizeBytes=1015275
    
  3. Migrations will fail when a saved object is larger than max_content_length
    node scripts/es.js snapshot --data-archive=src/core/server/saved_objects/migrationsv2/integration_tests/archives/7.14.0_xpack_sample_saved_objects.zip -E http.max_content_length=1000000
    node scripts/kibana.js --dev --migrations.maxBatchSizeBytes=1015275
    

Regression testing on cloud

I tested this on Cloud by loading src/core/server/saved_objects/migrationsv2/integration_tests/archives/7.7.2_xpack_100k_obj.zip into a deployment and then comparing the migration duration on 7.14.1 vs 7.15.0 (BC). Because 100k saved objects means we're doing roughly 100 requests in serial, there's quite a lot of variance between runs (~30 seconds). But migrations took a similar amount of time, with the best-case results for both versions only a few seconds apart.

Looking at monitoring data, the event-loop delay had only one small spike of up to 8ms, so it's unlikely we're CPU bound.

Because of the high memory consumption, I had to test this with 2GB instances, which have slightly more CPU power than our 1GB instances, so I will repeat this once #111911 is merged.

Checklist

Delete any items that are not applicable to this PR.

  • Documentation was added for features that require explanation or tutorials
  • Unit or functional tests were updated or added to match the most common scenarios
  • If a plugin configuration key changed, check if it needs to be allowlisted in the cloud and added to the docker list

Risk Matrix

Delete this section if it is not applicable to this PR.

Before closing this PR, invite QA, stakeholders, and other developers to identify risks that should be tested prior to the change/feature release.

When forming the risk matrix, consider some of the following examples and how they may potentially impact the change:

Risk Probability Severity Mitigation/Notes
Calculating the size of every document adds a lot of CPU overhead which slows down migrations Low High Tested on a laptop and this doesn't have a noticeable effect on the migration duration of 100k docs. We need to repeat this test on Cloud where CPU is much more limited

For maintainers

@rudolf rudolf added Team:Core Core services & architecture: plugins, logging, config, saved objects, http, ES client, i18n, etc project:ResilientSavedObjectMigrations Reduce Kibana upgrade failures by making saved object migrations more resilient labels Aug 20, 2021
@rudolf rudolf marked this pull request as ready for review August 26, 2021 12:47
@rudolf rudolf requested a review from a team as a code owner August 26, 2021 12:47
@elasticmachine
Contributor

Pinging @elastic/kibana-core (Team:Core)

root: {
appenders: ['default', 'file'],
},
loggers: [
Contributor Author

Not sure what the difference is, but even though the previous config passed validation, it didn't produce a log file?

@afharo afharo requested a review from a team August 26, 2021 15:44
@mshustov
Contributor

ack: going to review on Friday 27.08

*/
const NDJSON_NEW_LINE_BYTES = 1;

const batches = [[]] as [SavedObjectsRawDoc[]];
Contributor

@mshustov mshustov Aug 27, 2021

a batch size doesn't affect siblings 👍
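
The `NDJSON_NEW_LINE_BYTES` constant in the quoted snippet accounts for the newline terminating each line of the request body. As a minimal sketch of why that byte matters, this hypothetical helper (not the Kibana code) assembles an Elasticsearch `_bulk` body in the NDJSON format, where each action and source line is JSON followed by `"\n"` and the body must end with a trailing newline:

```typescript
// Hypothetical helper illustrating the Elasticsearch _bulk NDJSON format:
// one action line plus one source line per document, each terminated by
// "\n", with the whole body ending in a newline. Every one of those
// newlines contributes to the byte size a batch-size calculation must
// account for.
function toBulkIndexBody(
  index: string,
  docs: Array<{ _id: string; _source: Record<string, unknown> }>
): string {
  let body = '';
  for (const doc of docs) {
    body += JSON.stringify({ index: { _index: index, _id: doc._id } }) + '\n';
    body += JSON.stringify(doc._source) + '\n';
  }
  return body; // ends with "\n", as the bulk API requires
}
```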

@rudolf rudolf requested a review from a team as a code owner August 31, 2021 13:40
Contributor

@tylersmalley tylersmalley left a comment

Docker change LGTM

@rudolf rudolf requested a review from afharo September 1, 2021 07:51
Member

@afharo afharo left a comment

LGTM! Thanks for doing this 🧡 !

Comment on lines +28 to +30
beforeAll(async () => {
await removeLogFile();
});
Member

NIT: Should we also delete the logs afterAll?

Contributor

@mshustov mshustov Sep 1, 2021

When I wrote this logic for the first time, I deliberately didn't add cleanup so that test failures could be investigated with the logs.

Contributor Author

We actually added the logs for debugging failed tests (and later used them for further integration testing). Since these logs won't interfere with other test runs, I think it's worth having them lie around so that you can see what happened.

@rudolf rudolf merged commit 393505a into elastic:master Sep 1, 2021
@rudolf rudolf deleted the migrations-dynamic-batch branch September 1, 2021 10:19
@rudolf rudolf added the auto-backport Deprecated - use backport:version if exact versions are needed label Sep 1, 2021
@kibanamachine
Contributor

💔 Backport failed

Status Branch Result
7.15 Commit could not be cherrypicked due to conflicts
7.x Commit could not be cherrypicked due to conflicts

To backport manually run:
node scripts/backport --pr 109540

@kibanamachine
Contributor

Friendly reminder: Looks like this PR hasn’t been backported yet.
To create backports run node scripts/backport --pr 109540 or prevent reminders by adding the backport:skip label.

@kibanamachine kibanamachine added the backport missing Added to PRs automatically when they are determined to be missing a backport. label Sep 2, 2021
rudolf added a commit to rudolf/kibana that referenced this pull request Sep 2, 2021
… by default) (elastic#109540)

* Fix logging for existing integration test

* First stab at limiting batches to batchSizeBytes

* Fix tests

* Fix batch size calculation, NDJSON needs to be terminated by an empty line

* Integration tests

* Fix type failures

* rename migration integration tests and log files to be consistent & more descriptive

* Review feedback

* Remove duplication of fatal error reasons

* migrations.maxBatchSizeBytes to docker environment vars

* docs for migrations.maxBatchSizeBytes
# Conflicts:
#	src/core/server/saved_objects/migrationsv2/integration_tests/7_13_0_unknown_types.test.ts
#	src/core/server/saved_objects/migrationsv2/integration_tests/migration_from_v1.test.ts
rudolf added a commit to rudolf/kibana that referenced this pull request Sep 2, 2021
… by default) (elastic#109540)

* Fix logging for existing integration test

* First stab at limiting batches to batchSizeBytes

* Fix tests

* Fix batch size calculation, NDJSON needs to be terminated by an empty line

* Integration tests

* Fix type failures

* rename migration integration tests and log files to be consistent & more descriptive

* Review feedback

* Remove duplication of fatal error reasons

* migrations.maxBatchSizeBytes to docker environment vars

* docs for migrations.maxBatchSizeBytes
# Conflicts:
#	src/core/server/saved_objects/migrationsv2/integration_tests/7_13_0_unknown_types.test.ts
#	src/core/server/saved_objects/migrationsv2/integration_tests/migration_from_v1.test.ts
@kibanamachine
Contributor

Looks like this PR has backport PRs but they still haven't been merged. Please merge them ASAP to keep the branches relatively in sync.

rudolf added a commit that referenced this pull request Sep 3, 2021
… 100mb by default) (#109540) (#110967)

* Migrationsv2: limit batch sizes to migrations.batchSizeBytes (= 100mb by default) (#109540)

* Fix logging for existing integration test

* First stab at limiting batches to batchSizeBytes

* Fix tests

* Fix batch size calculation, NDJSON needs to be terminated by an empty line

* Integration tests

* Fix type failures

* rename migration integration tests and log files to be consistent & more descriptive

* Review feedback

* Remove duplication of fatal error reasons

* migrations.maxBatchSizeBytes to docker environment vars

* docs for migrations.maxBatchSizeBytes
# Conflicts:
#	src/core/server/saved_objects/migrationsv2/integration_tests/7_13_0_unknown_types.test.ts
#	src/core/server/saved_objects/migrationsv2/integration_tests/migration_from_v1.test.ts

* Fix tests on 7.x being off by one byte

Co-authored-by: Kibana Machine <[email protected]>
@kibanamachine
Contributor

Looks like this PR has backport PRs but they still haven't been merged. Please merge them ASAP to keep the branches relatively in sync.

@kibanamachine kibanamachine removed the backport missing Added to PRs automatically when they are determined to be missing a backport. label Sep 6, 2021
rudolf added a commit that referenced this pull request Sep 6, 2021
…= 100mb by default) (#109540) (#110968)

* Migrationsv2: limit batch sizes to migrations.batchSizeBytes (= 100mb by default) (#109540)

* Fix logging for existing integration test

* First stab at limiting batches to batchSizeBytes

* Fix tests

* Fix batch size calculation, NDJSON needs to be terminated by an empty line

* Integration tests

* Fix type failures

* rename migration integration tests and log files to be consistent & more descriptive

* Review feedback

* Remove duplication of fatal error reasons

* migrations.maxBatchSizeBytes to docker environment vars

* docs for migrations.maxBatchSizeBytes
# Conflicts:
#	src/core/server/saved_objects/migrationsv2/integration_tests/7_13_0_unknown_types.test.ts
#	src/core/server/saved_objects/migrationsv2/integration_tests/migration_from_v1.test.ts

* Fix tests on 7.x being off by one byte

Co-authored-by: Kibana Machine <[email protected]>
@bhavyarm
Contributor

@rudolf how do we test this PR? Can we get access to data you used for testing it? Thanks!

@rudolf
Contributor Author

rudolf commented Sep 14, 2021

@bhavyarm I've added a "QA Testing" section, but these steps are already exercised by the integration tests, so I shared some real-world saved objects over Slack.

@kibanamachine
Contributor

kibanamachine commented Sep 14, 2021

💔 Build Failed

Failed CI Steps


Test Failures

Kibana Pipeline / general / X-Pack API Integration Tests.x-pack/test/api_integration/apis/uptime/rest/telemetry_collectors_fleet·ts.apis uptime uptime REST endpoints with generated data telemetry collectors fleet should receive expected results for fleet managed monitors after calling monitor logging

Link to Jenkins

Standard Out

Failed Tests Reporter:
  - Test has failed 1 times on tracked branches: https://dryrun

[00:00:00]       │
[00:00:00]         └-: apis
[00:00:00]           └-> "before all" hook in "apis"
[00:06:21]           └-: uptime
[00:06:21]             └-> "before all" hook in "uptime"
[00:06:21]             └-> "before all" hook in "uptime"
[00:06:21]               │ debg No indices to delete [pattern=heartbeat*]
[00:06:26]             └-: uptime REST endpoints
[00:06:26]               └-> "before all" hook in "uptime REST endpoints"
[00:06:26]               └-: with generated data
[00:06:26]                 └-> "before all" hook in "with generated data"
[00:06:45]                 └-: telemetry collectors fleet
[00:06:45]                   │ info [x-pack/test/functional/es_archives/uptime/blank] Unloading indices from "mappings.json"
[00:06:45]                   └-> "before all" hook for "should receive expected results for fleet managed monitors after calling monitor logging"
[00:06:45]                   └-> "before all" hook: generating data for "should receive expected results for fleet managed monitors after calling monitor logging"
[00:06:45]                     │ info [x-pack/test/functional/es_archives/uptime/blank_data_stream] Loading "mappings.json"
[00:06:45]                     │ info [x-pack/test/functional/es_archives/uptime/blank_data_stream] Loading "data.json"
[00:06:45]                     │ info [o.e.c.m.MetadataDeleteIndexService] [node-01] [heartbeat-8-generated-test/dDyWl_JkRFiZuA8mhyU3bA] deleting index
[00:06:45]                     │ info [x-pack/test/functional/es_archives/uptime/blank] Deleted existing index "heartbeat-8-generated-test"
[00:06:45]                     │ info [x-pack/test/functional/es_archives/uptime/blank] Unloading indices from "data.json"
[00:06:45]                     │ info [o.e.c.m.MetadataDeleteIndexService] [node-01] [.ds-synthetics-http-default-2021.04.20-000001/DIGFmFhIQP2F0jW8zLAhuA] deleting index
[00:06:45]                     │ info [x-pack/test/functional/es_archives/uptime/blank_data_stream] Deleted existing index ".ds-synthetics-http-default-2021.04.20-000001"
[00:06:45]                     │ info [o.e.c.m.MetadataCreateIndexService] [node-01] [.ds-synthetics-http-default-2021.04.20-000001] creating index, cause [api], templates [], shards [1]/[1]
[00:06:45]                     │ info [x-pack/test/functional/es_archives/uptime/blank_data_stream] Created index ".ds-synthetics-http-default-2021.04.20-000001"
[00:06:45]                     │ debg [x-pack/test/functional/es_archives/uptime/blank_data_stream] ".ds-synthetics-http-default-2021.04.20-000001" settings {"index":{"codec":"best_compression","hidden":"true","lifecycle":{"name":"synthetics"},"mapping":{"total_fields":{"limit":"10000"}},"number_of_replicas":"1","number_of_shards":"1","refresh_interval":"5s"}}
[00:06:45]                     │ info [o.e.x.i.IndexLifecycleTransition] [node-01] moving index [.ds-synthetics-http-default-2021.04.20-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [synthetics]
[00:06:45]                     │ info [o.e.c.m.MetadataCreateIndexService] [node-01] [.ds-synthetics-http-default-2021.09.14-000001] creating index, cause [initialize_data_stream], templates [synthetics], shards [1]/[1]
[00:06:45]                     │ info [o.e.c.m.MetadataCreateDataStreamService] [node-01] adding data stream [synthetics-http-default] with write index [.ds-synthetics-http-default-2021.09.14-000001], backing indices [], and aliases []
[00:06:45]                     │ info [o.e.x.i.IndexLifecycleTransition] [node-01] moving index [.ds-synthetics-http-default-2021.09.14-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [synthetics]
[00:06:45]                     │ info [o.e.c.m.MetadataMappingService] [node-01] [.ds-synthetics-http-default-2021.09.14-000001/cWBt-E7qRGGhdqDl8JvdkQ] update_mapping [_doc]
[00:06:45]                   └-> should receive expected results for fleet managed monitors after calling monitor logging
[00:06:45]                     └-> "before each" hook: global before each for "should receive expected results for fleet managed monitors after calling monitor logging"
[00:06:45]                     └-> "before each" hook: clear settings for "should receive expected results for fleet managed monitors after calling monitor logging"
[00:06:45]                       │ debg Deleting saved object {
[00:06:45]                       │        type: 'uptime-dynamic-settings',
[00:06:45]                       │        id: 'uptime-dynamic-settings-singleton'
[00:06:45]                       │      }/%s
[00:06:46]                     └-> "before each" hook: load heartbeat data for "should receive expected results for fleet managed monitors after calling monitor logging"
[00:06:46]                       │ info [x-pack/test/functional/es_archives/uptime/blank] Loading "mappings.json"
[00:06:46]                       │ info [x-pack/test/functional/es_archives/uptime/blank] Loading "data.json"
[00:06:46]                       │ info [o.e.c.m.MetadataCreateIndexService] [node-01] [heartbeat-8-generated-test] creating index, cause [api], templates [], shards [1]/[1]
[00:06:46]                       │ info [x-pack/test/functional/es_archives/uptime/blank] Created index "heartbeat-8-generated-test"
[00:06:46]                       │ debg [x-pack/test/functional/es_archives/uptime/blank] "heartbeat-8-generated-test" settings undefined
[00:06:46]                       │ info [o.e.x.i.IndexLifecycleTransition] [node-01] moving index [.ds-synthetics-http-default-2021.04.20-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] in policy [synthetics]
[00:06:46]                       │ info [x-pack/test/functional/es_archives/uptime/blank_data_stream] Loading "mappings.json"
[00:06:46]                       │ info [x-pack/test/functional/es_archives/uptime/blank_data_stream] Loading "data.json"
[00:06:46]                       │ info [x-pack/test/functional/es_archives/uptime/blank_data_stream] Skipped restore for existing index ".ds-synthetics-http-default-2021.04.20-000001"
[00:06:46]                       │ info [o.e.x.i.IndexLifecycleTransition] [node-01] moving index [.ds-synthetics-http-default-2021.09.14-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] in policy [synthetics]
[00:06:46]                     └-> "before each" hook for "should receive expected results for fleet managed monitors after calling monitor logging"
[00:06:46]                     │ info [o.e.x.i.IndexLifecycleTransition] [node-01] moving index [.ds-synthetics-http-default-2021.04.20-000001] from [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [synthetics]
[00:06:46]                     │ info [o.e.x.i.IndexLifecycleTransition] [node-01] moving index [.ds-synthetics-http-default-2021.09.14-000001] from [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [synthetics]
[00:06:46]                     └- ✖ fail: apis uptime uptime REST endpoints with generated data telemetry collectors fleet should receive expected results for fleet managed monitors after calling monitor logging
[00:06:46]                     │       Error: expected { overview_page: 0,
[00:06:46]                     │   monitor_page: 1,
[00:06:46]                     │   no_of_unique_monitors: 4,
[00:06:46]                     │   settings_page: 0,
[00:06:46]                     │   monitor_frequency: [ 0.001, 0.001, 60, 60 ],
[00:06:46]                     │   monitor_name_stats: { min_length: 7, max_length: 22, avg_length: 12 },
[00:06:46]                     │   no_of_unique_observer_locations: 3,
[00:06:46]                     │   observer_location_name_stats: { min_length: 2, max_length: 7, avg_length: 4.8 },
[00:06:46]                     │   dateRangeStart: [ 'now/d' ],
[00:06:46]                     │   dateRangeEnd: [ 'now/d' ],
[00:06:46]                     │   autoRefreshEnabled: true,
[00:06:46]                     │   autorefreshInterval: [ 100 ],
[00:06:46]                     │   fleet_no_of_unique_monitors: 4,
[00:06:46]                     │   fleet_monitor_frequency: [ 0.001, 0.001, 60, 60 ],
[00:06:46]                     │   fleet_monitor_name_stats: { min_length: 7, max_length: 22, avg_length: 12 } } to sort of equal { overview_page: 0,
[00:06:46]                     │   monitor_page: 1,
[00:06:46]                     │   no_of_unique_monitors: 4,
[00:06:46]                     │   settings_page: 0,
[00:06:46]                     │   monitor_frequency: [ 120, 0.001, 60, 60 ],
[00:06:46]                     │   monitor_name_stats: { min_length: 7, max_length: 22, avg_length: 12 },
[00:06:46]                     │   no_of_unique_observer_locations: 3,
[00:06:46]                     │   observer_location_name_stats: { min_length: 2, max_length: 7, avg_length: 4.8 },
[00:06:46]                     │   dateRangeStart: [ 'now/d' ],
[00:06:46]                     │   dateRangeEnd: [ 'now/d' ],
[00:06:46]                     │   autoRefreshEnabled: true,
[00:06:46]                     │   autorefreshInterval: [ 100 ],
[00:06:46]                     │   fleet_monitor_frequency: [ 120, 0.001, 60, 60 ],
[00:06:46]                     │   fleet_monitor_name_stats: { min_length: 7, max_length: 22, avg_length: 12 },
[00:06:46]                     │   fleet_no_of_unique_monitors: 4 }
[00:06:46]                     │       + expected - actual
[00:06:46]                     │ 
[00:06:46]                     │          "dateRangeStart": [
[00:06:46]                     │            "now/d"
[00:06:46]                     │          ]
[00:06:46]                     │          "fleet_monitor_frequency": [
[00:06:46]                     │       +    120
[00:06:46]                     │            0.001
[00:06:46]                     │       -    0.001
[00:06:46]                     │            60
[00:06:46]                     │            60
[00:06:46]                     │          ]
[00:06:46]                     │          "fleet_monitor_name_stats": {
[00:06:46]                     │ --
[00:06:46]                     │            "min_length": 7
[00:06:46]                     │          }
[00:06:46]                     │          "fleet_no_of_unique_monitors": 4
[00:06:46]                     │          "monitor_frequency": [
[00:06:46]                     │       +    120
[00:06:46]                     │            0.001
[00:06:46]                     │       -    0.001
[00:06:46]                     │            60
[00:06:46]                     │            60
[00:06:46]                     │          ]
[00:06:46]                     │          "monitor_name_stats": {
[00:06:46]                     │       
[00:06:46]                     │       at Assertion.assert (/dev/shm/workspace/parallel/20/kibana/node_modules/@kbn/expect/expect.js:100:11)
[00:06:46]                     │       at Assertion.eql (/dev/shm/workspace/parallel/20/kibana/node_modules/@kbn/expect/expect.js:244:8)
[00:06:46]                     │       at Context.<anonymous> (test/api_integration/apis/uptime/rest/telemetry_collectors_fleet.ts:160:25)
[00:06:46]                     │       at runMicrotasks (<anonymous>)
[00:06:46]                     │       at processTicksAndRejections (internal/process/task_queues.js:95:5)
[00:06:46]                     │       at Object.apply (/dev/shm/workspace/parallel/20/kibana/node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16)
[00:06:46]                     │ 
[00:06:46]                     │ 

Stack Trace

Error: expected { overview_page: 0,
  monitor_page: 1,
  no_of_unique_monitors: 4,
  settings_page: 0,
  monitor_frequency: [ 0.001, 0.001, 60, 60 ],
  monitor_name_stats: { min_length: 7, max_length: 22, avg_length: 12 },
  no_of_unique_observer_locations: 3,
  observer_location_name_stats: { min_length: 2, max_length: 7, avg_length: 4.8 },
  dateRangeStart: [ 'now/d' ],
  dateRangeEnd: [ 'now/d' ],
  autoRefreshEnabled: true,
  autorefreshInterval: [ 100 ],
  fleet_no_of_unique_monitors: 4,
  fleet_monitor_frequency: [ 0.001, 0.001, 60, 60 ],
  fleet_monitor_name_stats: { min_length: 7, max_length: 22, avg_length: 12 } } to sort of equal { overview_page: 0,
  monitor_page: 1,
  no_of_unique_monitors: 4,
  settings_page: 0,
  monitor_frequency: [ 120, 0.001, 60, 60 ],
  monitor_name_stats: { min_length: 7, max_length: 22, avg_length: 12 },
  no_of_unique_observer_locations: 3,
  observer_location_name_stats: { min_length: 2, max_length: 7, avg_length: 4.8 },
  dateRangeStart: [ 'now/d' ],
  dateRangeEnd: [ 'now/d' ],
  autoRefreshEnabled: true,
  autorefreshInterval: [ 100 ],
  fleet_monitor_frequency: [ 120, 0.001, 60, 60 ],
  fleet_monitor_name_stats: { min_length: 7, max_length: 22, avg_length: 12 },
  fleet_no_of_unique_monitors: 4 }
    at Assertion.assert (/dev/shm/workspace/parallel/20/kibana/node_modules/@kbn/expect/expect.js:100:11)
    at Assertion.eql (/dev/shm/workspace/parallel/20/kibana/node_modules/@kbn/expect/expect.js:244:8)
    at Context.<anonymous> (test/api_integration/apis/uptime/rest/telemetry_collectors_fleet.ts:160:25)
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (internal/process/task_queues.js:95:5)
    at Object.apply (/dev/shm/workspace/parallel/20/kibana/node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16) {
  actual: '{\n' +
    '  "autoRefreshEnabled": true\n' +
    '  "autorefreshInterval": [\n' +
    '    100\n' +
    '  ]\n' +
    '  "dateRangeEnd": [\n' +
    '    "now/d"\n' +
    '  ]\n' +
    '  "dateRangeStart": [\n' +
    '    "now/d"\n' +
    '  ]\n' +
    '  "fleet_monitor_frequency": [\n' +
    '    0.001\n' +
    '    0.001\n' +
    '    60\n' +
    '    60\n' +
    '  ]\n' +
    '  "fleet_monitor_name_stats": {\n' +
    '    "avg_length": 12\n' +
    '    "max_length": 22\n' +
    '    "min_length": 7\n' +
    '  }\n' +
    '  "fleet_no_of_unique_monitors": 4\n' +
    '  "monitor_frequency": [\n' +
    '    0.001\n' +
    '    0.001\n' +
    '    60\n' +
    '    60\n' +
    '  ]\n' +
    '  "monitor_name_stats": {\n' +
    '    "avg_length": 12\n' +
    '    "max_length": 22\n' +
    '    "min_length": 7\n' +
    '  }\n' +
    '  "monitor_page": 1\n' +
    '  "no_of_unique_monitors": 4\n' +
    '  "no_of_unique_observer_locations": 3\n' +
    '  "observer_location_name_stats": {\n' +
    '    "avg_length": 4.8\n' +
    '    "max_length": 7\n' +
    '    "min_length": 2\n' +
    '  }\n' +
    '  "overview_page": 0\n' +
    '  "settings_page": 0\n' +
    '}',
  expected: '{\n' +
    '  "autoRefreshEnabled": true\n' +
    '  "autorefreshInterval": [\n' +
    '    100\n' +
    '  ]\n' +
    '  "dateRangeEnd": [\n' +
    '    "now/d"\n' +
    '  ]\n' +
    '  "dateRangeStart": [\n' +
    '    "now/d"\n' +
    '  ]\n' +
    '  "fleet_monitor_frequency": [\n' +
    '    120\n' +
    '    0.001\n' +
    '    60\n' +
    '    60\n' +
    '  ]\n' +
    '  "fleet_monitor_name_stats": {\n' +
    '    "avg_length": 12\n' +
    '    "max_length": 22\n' +
    '    "min_length": 7\n' +
    '  }\n' +
    '  "fleet_no_of_unique_monitors": 4\n' +
    '  "monitor_frequency": [\n' +
    '    120\n' +
    '    0.001\n' +
    '    60\n' +
    '    60\n' +
    '  ]\n' +
    '  "monitor_name_stats": {\n' +
    '    "avg_length": 12\n' +
    '    "max_length": 22\n' +
    '    "min_length": 7\n' +
    '  }\n' +
    '  "monitor_page": 1\n' +
    '  "no_of_unique_monitors": 4\n' +
    '  "no_of_unique_observer_locations": 3\n' +
    '  "observer_location_name_stats": {\n' +
    '    "avg_length": 4.8\n' +
    '    "max_length": 7\n' +
    '    "min_length": 2\n' +
    '  }\n' +
    '  "overview_page": 0\n' +
    '  "settings_page": 0\n' +
    '}',
  showDiff: true
}

Kibana Pipeline / general / X-Pack API Integration Tests.x-pack/test/api_integration/apis/uptime/rest/telemetry_collectors_fleet·ts.apis uptime uptime REST endpoints with generated data telemetry collectors fleet should receive expected results for fleet managed monitors after calling monitor logging

Link to Jenkins

Standard Out

Failed Tests Reporter:
  - Test has not failed recently on tracked branches

[00:00:00]       │
[00:00:00]         └-: apis
[00:00:00]           └-> "before all" hook in "apis"
[00:06:44]           └-: uptime
[00:06:44]             └-> "before all" hook in "uptime"
[00:06:44]             └-> "before all" hook in "uptime"
[00:06:44]               │ debg No indices to delete [pattern=heartbeat*]
[00:06:49]             └-: uptime REST endpoints
[00:06:49]               └-> "before all" hook in "uptime REST endpoints"
[00:06:49]               └-: with generated data
[00:06:49]                 └-> "before all" hook in "with generated data"
[00:07:08]                 └-: telemetry collectors fleet
[00:07:08]                   │ info [x-pack/test/functional/es_archives/uptime/blank] Unloading indices from "mappings.json"
[00:07:08]                   └-> "before all" hook for "should receive expected results for fleet managed monitors after calling monitor logging"
[00:07:08]                   └-> "before all" hook: generating data for "should receive expected results for fleet managed monitors after calling monitor logging"
[00:07:08]                     │ info [x-pack/test/functional/es_archives/uptime/blank_data_stream] Loading "mappings.json"
[00:07:08]                     │ info [x-pack/test/functional/es_archives/uptime/blank_data_stream] Loading "data.json"
[00:07:08]                     │ info [o.e.c.m.MetadataDeleteIndexService] [node-01] [heartbeat-8-generated-test/qj1sYFzpTxaRTw1MkUrs6w] deleting index
[00:07:08]                     │ info [x-pack/test/functional/es_archives/uptime/blank] Deleted existing index "heartbeat-8-generated-test"
[00:07:08]                     │ info [x-pack/test/functional/es_archives/uptime/blank] Unloading indices from "data.json"
[00:07:08]                     │ info [o.e.c.m.MetadataDeleteIndexService] [node-01] [.ds-synthetics-http-default-2021.04.20-000001/a_QaACkrQQ65GGL1GiVKlw] deleting index
[00:07:08]                     │ info [x-pack/test/functional/es_archives/uptime/blank_data_stream] Deleted existing index ".ds-synthetics-http-default-2021.04.20-000001"
[00:07:08]                     │ info [o.e.c.m.MetadataCreateIndexService] [node-01] [.ds-synthetics-http-default-2021.04.20-000001] creating index, cause [api], templates [], shards [1]/[1]
[00:07:08]                     │ info [x-pack/test/functional/es_archives/uptime/blank_data_stream] Created index ".ds-synthetics-http-default-2021.04.20-000001"
[00:07:08]                     │ debg [x-pack/test/functional/es_archives/uptime/blank_data_stream] ".ds-synthetics-http-default-2021.04.20-000001" settings {"index":{"codec":"best_compression","hidden":"true","lifecycle":{"name":"synthetics"},"mapping":{"total_fields":{"limit":"10000"}},"number_of_replicas":"1","number_of_shards":"1","refresh_interval":"5s"}}
[00:07:08]                     │ info [o.e.x.i.IndexLifecycleTransition] [node-01] moving index [.ds-synthetics-http-default-2021.04.20-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [synthetics]
[00:07:08]                     │ info [o.e.c.m.MetadataCreateIndexService] [node-01] [.ds-synthetics-http-default-2021.09.14-000001] creating index, cause [initialize_data_stream], templates [synthetics], shards [1]/[1]
[00:07:08]                     │ info [o.e.c.m.MetadataCreateDataStreamService] [node-01] adding data stream [synthetics-http-default] with write index [.ds-synthetics-http-default-2021.09.14-000001], backing indices [], and aliases []
[00:07:08]                     │ info [o.e.x.i.IndexLifecycleTransition] [node-01] moving index [.ds-synthetics-http-default-2021.09.14-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [synthetics]
[00:07:08]                     │ info [o.e.c.m.MetadataMappingService] [node-01] [.ds-synthetics-http-default-2021.09.14-000001/apPZtJhiQwKxz-3HVVyp2Q] update_mapping [_doc]
[00:07:08]                     │ info [o.e.x.i.IndexLifecycleTransition] [node-01] moving index [.ds-synthetics-http-default-2021.04.20-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] in policy [synthetics]
[00:07:08]                     │ info [o.e.x.i.IndexLifecycleTransition] [node-01] moving index [.ds-synthetics-http-default-2021.09.14-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] in policy [synthetics]
[00:07:08]                     │ info [o.e.x.i.IndexLifecycleTransition] [node-01] moving index [.ds-synthetics-http-default-2021.04.20-000001] from [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [synthetics]
[00:07:08]                     │ info [o.e.x.i.IndexLifecycleTransition] [node-01] moving index [.ds-synthetics-http-default-2021.09.14-000001] from [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [synthetics]
[00:07:08]                   └-> should receive expected results for fleet managed monitors after calling monitor logging
[00:07:08]                     └-> "before each" hook: global before each for "should receive expected results for fleet managed monitors after calling monitor logging"
[00:07:08]                     └-> "before each" hook: clear settings for "should receive expected results for fleet managed monitors after calling monitor logging"
[00:07:08]                       │ debg Deleting saved object {
[00:07:08]                       │        type: 'uptime-dynamic-settings',
[00:07:08]                       │        id: 'uptime-dynamic-settings-singleton'
[00:07:08]                       │      }/%s
[00:07:09]                     └-> "before each" hook: load heartbeat data for "should receive expected results for fleet managed monitors after calling monitor logging"
[00:07:09]                       │ info [x-pack/test/functional/es_archives/uptime/blank] Loading "mappings.json"
[00:07:09]                       │ info [x-pack/test/functional/es_archives/uptime/blank] Loading "data.json"
[00:07:09]                       │ info [o.e.c.m.MetadataCreateIndexService] [node-01] [heartbeat-8-generated-test] creating index, cause [api], templates [], shards [1]/[1]
[00:07:09]                       │ info [x-pack/test/functional/es_archives/uptime/blank] Created index "heartbeat-8-generated-test"
[00:07:09]                       │ debg [x-pack/test/functional/es_archives/uptime/blank] "heartbeat-8-generated-test" settings undefined
[00:07:09]                       │ info [x-pack/test/functional/es_archives/uptime/blank_data_stream] Loading "mappings.json"
[00:07:09]                       │ info [x-pack/test/functional/es_archives/uptime/blank_data_stream] Loading "data.json"
[00:07:09]                       │ info [x-pack/test/functional/es_archives/uptime/blank_data_stream] Skipped restore for existing index ".ds-synthetics-http-default-2021.04.20-000001"
[00:07:09]                     └-> "before each" hook for "should receive expected results for fleet managed monitors after calling monitor logging"
[00:07:09]                     └- ✖ fail: apis uptime uptime REST endpoints with generated data telemetry collectors fleet should receive expected results for fleet managed monitors after calling monitor logging
[00:07:09]                     │       Error: expected { overview_page: 0,
[00:07:09]                     │   monitor_page: 1,
[00:07:09]                     │   no_of_unique_monitors: 4,
[00:07:09]                     │   settings_page: 0,
[00:07:09]                     │   monitor_frequency: [ 0.001, 0.001, 60, 60 ],
[00:07:09]                     │   monitor_name_stats: { min_length: 7, max_length: 22, avg_length: 12 },
[00:07:09]                     │   no_of_unique_observer_locations: 3,
[00:07:09]                     │   observer_location_name_stats: { min_length: 2, max_length: 7, avg_length: 4.8 },
[00:07:09]                     │   dateRangeStart: [ 'now/d' ],
[00:07:09]                     │   dateRangeEnd: [ 'now/d' ],
[00:07:09]                     │   autoRefreshEnabled: true,
[00:07:09]                     │   autorefreshInterval: [ 100 ],
[00:07:09]                     │   fleet_no_of_unique_monitors: 4,
[00:07:09]                     │   fleet_monitor_frequency: [ 0.001, 0.001, 60, 60 ],
[00:07:09]                     │   fleet_monitor_name_stats: { min_length: 7, max_length: 22, avg_length: 12 } } to sort of equal { overview_page: 0,
[00:07:09]                     │   monitor_page: 1,
[00:07:09]                     │   no_of_unique_monitors: 4,
[00:07:09]                     │   settings_page: 0,
[00:07:09]                     │   monitor_frequency: [ 120, 0.001, 60, 60 ],
[00:07:09]                     │   monitor_name_stats: { min_length: 7, max_length: 22, avg_length: 12 },
[00:07:09]                     │   no_of_unique_observer_locations: 3,
[00:07:09]                     │   observer_location_name_stats: { min_length: 2, max_length: 7, avg_length: 4.8 },
[00:07:09]                     │   dateRangeStart: [ 'now/d' ],
[00:07:09]                     │   dateRangeEnd: [ 'now/d' ],
[00:07:09]                     │   autoRefreshEnabled: true,
[00:07:09]                     │   autorefreshInterval: [ 100 ],
[00:07:09]                     │   fleet_monitor_frequency: [ 120, 0.001, 60, 60 ],
[00:07:09]                     │   fleet_monitor_name_stats: { min_length: 7, max_length: 22, avg_length: 12 },
[00:07:09]                     │   fleet_no_of_unique_monitors: 4 }
[00:07:09]                     │       + expected - actual
[00:07:09]                     │ 
[00:07:09]                     │          "dateRangeStart": [
[00:07:09]                     │            "now/d"
[00:07:09]                     │          ]
[00:07:09]                     │          "fleet_monitor_frequency": [
[00:07:09]                     │       +    120
[00:07:09]                     │            0.001
[00:07:09]                     │       -    0.001
[00:07:09]                     │            60
[00:07:09]                     │            60
[00:07:09]                     │          ]
[00:07:09]                     │          "fleet_monitor_name_stats": {
[00:07:09]                     │ --
[00:07:09]                     │            "min_length": 7
[00:07:09]                     │          }
[00:07:09]                     │          "fleet_no_of_unique_monitors": 4
[00:07:09]                     │          "monitor_frequency": [
[00:07:09]                     │       +    120
[00:07:09]                     │            0.001
[00:07:09]                     │       -    0.001
[00:07:09]                     │            60
[00:07:09]                     │            60
[00:07:09]                     │          ]
[00:07:09]                     │          "monitor_name_stats": {
[00:07:09]                     │       
[00:07:09]                     │       at Assertion.assert (/dev/shm/workspace/parallel/20/kibana/node_modules/@kbn/expect/expect.js:100:11)
[00:07:09]                     │       at Assertion.eql (/dev/shm/workspace/parallel/20/kibana/node_modules/@kbn/expect/expect.js:244:8)
[00:07:09]                     │       at Context.<anonymous> (test/api_integration/apis/uptime/rest/telemetry_collectors_fleet.ts:160:25)
[00:07:09]                     │       at processTicksAndRejections (internal/process/task_queues.js:95:5)
[00:07:09]                     │       at Object.apply (/dev/shm/workspace/parallel/20/kibana/node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16)
[00:07:09]                     │ 
[00:07:09]                     │ 


Metrics

✅ unchanged

Labels
auto-backport Deprecated - use backport:version if exact versions are needed project:ResilientSavedObjectMigrations Reduce Kibana upgrade failures by making saved object migrations more resilient release_note:fix Team:Core Core services & architecture: plugins, logging, config, saved objects, http, ES client, i18n, etc v7.15.0 v7.16.0
Successfully merging this pull request may close these issues.

Migrations should dynamically adjust batch size to prevent failing on 413 errors from Elasticsearch
8 participants