
[core.savedObjects] Fix maxImportExportSize config & update docs. #94019

Merged (12 commits) on Mar 18, 2021

Conversation

lukeelmers
Member

Closes #91172

  • Adds documentation for maxImportExportSize
  • Mentions maxImportExportSize setting from exportSizeExceeded error
  • Updates incorrect validation for the config (should be number, but was inadvertently validated as bytes)
  • Updates core usage collection schema for clarity: maxImportExportSizeInBytes -> maxImportExportSize

@lukeelmers added the Feature:Saved Objects, v8.0.0, release_note:skip (Skip the PR/issue when compiling release notes), docs, and v7.13.0 labels on Mar 8, 2021
@lukeelmers lukeelmers self-assigned this Mar 8, 2021
@lukeelmers
Member Author

@elasticmachine merge upstream

```diff
@@ -28,7 +28,7 @@ export const savedObjectsConfig = {
   path: 'savedObjects',
   schema: schema.object({
     maxImportPayloadBytes: schema.byteSize({ defaultValue: 26214400 }),
-    maxImportExportSize: schema.byteSize({ defaultValue: 10000 }),
+    maxImportExportSize: schema.number({ defaultValue: 10000 }),
```
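For context, a minimal sketch of the difference in validation behavior (assuming @kbn/config-schema's byteSize/number behavior; the values below are illustrative, not taken from this PR):

```ts
import { schema } from '@kbn/config-schema';

// Old definition: the value was parsed as a byte size, so byte-size strings
// like '1kb' validated even though the setting is meant to be an object count.
const asByteSize = schema.byteSize({ defaultValue: 10000 });
console.log(asByteSize.validate('1kb').getValueInBytes()); // 1024

// Corrected definition: only plain numbers validate, matching the intent of
// "maximum number of objects per import/export".
const asNumber = schema.number({ defaultValue: 10000 });
console.log(asNumber.validate(20000)); // 20000
// asNumber.validate('1kb') would throw a validation error instead.
```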
Member Author

One open question is whether we want to increase this default to something higher than 10k, now that we're no longer bound by the index.max_result_window limit.

I did a quick and dirty test locally to see how memory would be affected: I indexed multiple copies of a relatively small index pattern saved object (600 bytes), then examined the metrics.ops logs to compare memory consumption before & after running an export on the objects:

| # of objects | Memory on export start | Memory on export end | % increase |
|---|---|---|---|
| 10k | 246MB | 276MB | ~12% |
| 20k | 314MB | 413MB | ~32% |
| 30k | 298MB | 456MB | ~53% |

The other noticeable difference was that by the time it hit 30k objects, the export was taking >30s, and the event loop delay spiked to 13s. 😬

This was an admittedly imperfect test -- I'd need to do a few more runs to get to something conclusive -- but wanted to post the initial results here for others.

Do we have a sense as to how often folks have run into issues with hitting the 10k export limit? Trying to weigh how seriously we should consider increasing it.

Contributor

  • Does the memory go back to closer to the start value after a minute or two? I'm curious if we just have a memory leak here.
  • Do we stream the results back to the browser as we get them or do we cache everything in memory before sending the final payload?

@lukeelmers (Member Author) commented Mar 11, 2021

> Does the memory go back to closer to the start value after a minute or two? I'm curious if we just have a memory leak here.

I don't think it's a memory leak; after a while, the memory drops back down to the same levels as before the export was initiated. Will post more results on this shortly.

> Do we stream the results back to the browser as we get them or do we cache everything in memory before sending the final payload?

The SO exporter collects everything in memory before streaming the response back to the browser. It appears we do this because the results all get sorted by id, so we need the full collection before sorting (which is very likely part of the slowness). That's also the main use case for maxImportExportSize at this point, as it prevents the Kibana server from using too much memory when a large export is happening. (Ideally we'd stream these results back directly as we get them, but that would mean they are unsorted... I'm not sure how important it is to sort everything or why we chose to do this.)
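To illustrate the trade-off being discussed, here's a rough sketch (not the actual exporter code; the types and helpers are simplified assumptions):

```ts
import { Readable } from 'stream';

interface SavedObject {
  id: string;
  type: string;
  attributes: Record<string, unknown>;
}

// Roughly what the current approach implies: collect all objects, sort by id,
// then build the response stream from the in-memory array. Peak memory grows
// with the number of exported objects.
async function bufferedExport(fetchAll: () => Promise<SavedObject[]>): Promise<Readable> {
  const objects = await fetchAll();
  objects.sort((a, b) => a.id.localeCompare(b.id));
  return Readable.from(objects.map((obj) => JSON.stringify(obj) + '\n'));
}

// The alternative mentioned above: emit each page of results as it arrives.
// Memory stays roughly constant, but the output is no longer globally sorted.
async function* streamedExport(pages: AsyncIterable<SavedObject[]>) {
  for await (const page of pages) {
    for (const obj of page) {
      yield JSON.stringify(obj) + '\n';
    }
  }
}
// Readable.from(streamedExport(pages)) could then be piped straight into the response.
```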

Member Author

Okay, I finally got some time to look more deeply at this. Here's a summary of findings:

Method

  • Run Kibana in dev mode:
    • Set ops.interval: 1000 so we are getting fresh metrics every second
    • Enable debug logs for savedobjects-service.exporter and metrics.ops
  • Use the SO import API to import 10k, 20k, and 30k identical index pattern saved objects (600 bytes each). Use createNewCopies=true so each one gets a unique ID and is treated as a new object.
  • Restart the Kibana server. Wait until all of the startup activity slows down and the memory metrics begin to stabilize.
  • Hit the SO export API directly to export all index-pattern objects (a rough sketch of this call is included after this list)
  • Capture the logs during the export, including an average of the metrics for the 5 seconds before the export was initiated and the 5 seconds after it completed.
  • Restart Kibana server before each new test
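For reference, a rough sketch of the export call as a script (this is not the exact tooling used in the test; it assumes a local dev Kibana on port 5601, a global fetch as in Node 18+, and the public POST /api/saved_objects/_export endpoint):

```ts
// Hypothetical helper, not from the PR: time an export of all index-pattern
// objects against a local dev Kibana instance.
const KIBANA_URL = 'http://localhost:5601';

async function timedExport(): Promise<void> {
  const started = Date.now();
  const res = await fetch(`${KIBANA_URL}/api/saved_objects/_export`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', 'kbn-xsrf': 'true' },
    body: JSON.stringify({ type: ['index-pattern'] }),
  });
  const ndjson = await res.text(); // one saved object per line, plus export details
  const lineCount = ndjson.trim().split('\n').length;
  console.log(`status=${res.status} lines=${lineCount} elapsed=${Date.now() - started}ms`);
}

timedExport().catch((err) => {
  console.error(err);
  process.exit(1);
});
```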

Findings

I did 3 runs for each scenario; approximate averages for each are presented below, with some outliers dropped. Numbers may not add up precisely, as they are rounded to 1 decimal place.

| # objects | Avg memory on export start (MB) | Avg memory on export end (MB) | % increase |
|---|---|---|---|
| 10k | 228.5 | 276.0 | 20.8% |
| 20k | 224.3 | 278.4 | 24.2% |
| 30k | 225.4 | 387.8 | 72.0% |

Observations

In general, the findings were directionally consistent with the quick test I did yesterday: there appears to be a modest jump when going from 10k to 20k objects, and a much more significant jump from 20k to 30k. The 30k tests also had much more variance from run to run: I did a few extra runs, and some came out in a ~55-65% range, while others were closer to ~85-86%. I'm not sure what to make of this discrepancy, but regardless, we still see a much higher jump from 20k to 30k.

I didn't crunch precise numbers on this, but I also saw a jump in the event loop delay as the object count increased: 10k objects ended with a delay around ~1-3s, 20k ~3-5s, 30k ~5-10s. This leads me to believe the delay spike I saw yesterday with 30k objects (when the event loop delay hit 13s) may have been an anomaly.

Lastly, I noticed that, based on the log timestamps, the slowest single step in the export is when the objects are being processed for export: 10k ~2s, 20k ~10s, 30k ~22s. This is the period during which the results are running through the processObjects exporter method, which handles applying the export transforms, optionally fetching references (which I did not do in my testing), and then sorting the objects before creating a stream with the results.

TLDR

I think we could consider bumping the default to 20k, as it generally presents a modest increase in memory consumption & delay; however, I'd hesitate to take it as far as 30k based on these findings. FWIW, I also don't think we need to increase the default at all; IMHO that depends on how often we are asked about exporting more than 10k objects. As the setting remains configurable, we should optimize for whatever will cover the vast majority of use cases.

Lastly, at some point we should do some proper profiling on the exporter itself. I don't know exactly where in processObjects the bottlenecks are based on the metrics logs alone, but the object sorting sticks out to me. Perhaps there's a technical reason for this that I'm not aware of... but if we could simply stream results back as we get them, applying transformations along the way without sorting, I expect we'd see a pretty big increase in performance.

Contributor

Thanks for looking into this!

The import/export API was designed to support streaming, but the implementation was never done and it has never come up as a requirement. We might be exporting many more saved objects in the next few releases, so I created #94552 to investigate the risk (and added some context there around the reason for the sorting).

You mentioned that you averaged the numbers and removed outliers. For this profiling, I think we should be looking at the worst-case scenarios and outliers rather than the averages. Even if it doesn't happen every time, if under certain circumstances the garbage collector is slow and we see a much bigger spike, we should be using that spike to decide whether a given export size is safe.

Contributor

Pondering on this a bit more...

I think it might make sense to leave the default as-is until we do #94552 and have a better sense of what default will actually work for the majority of users.

Member Author

> For this profiling, I think we should be looking at the worst-case scenarios and outliers rather than the averages

The outliers I dropped in averaging the numbers were all ones that were unusually low, so the numbers above should still reflect a worst-case scenario. If I were to include the outliers, the numbers would all drop slightly.

> I think it might make sense to leave the default as-is until we do #94552 and have a better sense of what default will actually work for the majority of users.

++ Agreed... if we plan to make a dedicated effort around this, then changing the default now feels unnecessary.

@lukeelmers added the Team:Core (Core services & architecture: plugins, logging, config, saved objects, http, ES client, i18n, etc) label on Mar 11, 2021
@lukeelmers lukeelmers marked this pull request as ready for review March 11, 2021 05:49
@lukeelmers lukeelmers requested review from a team as code owners March 11, 2021 05:49
@elasticmachine
Contributor

Pinging @elastic/kibana-core (Team:Core)

@lukeelmers lukeelmers requested a review from rudolf March 11, 2021 05:50
@lukeelmers
Member Author

@elasticmachine merge upstream

Contributor

@pgayvallet left a comment

Do we want to add a deprecation for this in src/core/server/config/deprecation/core_deprecations.ts?

docs/setup/settings.asciidoc (outdated review comment, resolved)
@lukeelmers (Member Author) commented Mar 16, 2021

> Do we want to add a deprecation for this in src/core/server/config/deprecation/core_deprecations.ts?

@pgayvallet IMO we should wait until we come to some conclusions in #94552, as that might affect what we do with this config. (Edit: Unless we know for certain that we want to get rid of this eventually, but I don't think we do... do we?)
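For reference only, in case we do this later: a hypothetical sketch of what such an entry might look like (the provider shape and the unusedFromRoot factory are my assumptions about the config deprecations API; nothing like this was added in this PR):

```ts
import { ConfigDeprecationProvider } from '@kbn/config';

// Hypothetical sketch only: flag the setting as unused so users see a
// deprecation warning at startup.
export const coreDeprecationProvider: ConfigDeprecationProvider = ({ unusedFromRoot }) => [
  unusedFromRoot('savedObjects.maxImportExportSize'),
];
```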

@lukeelmers
Member Author

@elasticmachine merge upstream

@lukeelmers lukeelmers requested review from rudolf and removed request for a team March 16, 2021 22:26
@lukeelmers
Member Author

@elasticmachine merge upstream

@lukeelmers
Member Author

@elasticmachine merge upstream

@lukeelmers added the auto-backport (Deprecated - use backport:version if exact versions are needed) label on Mar 18, 2021
@lukeelmers lukeelmers enabled auto-merge (squash) March 18, 2021 21:26
@kibanamachine
Contributor

💚 Build Succeeded

Metrics [docs]

✅ unchanged

History

To update your PR or re-run it, just comment with:
@elasticmachine merge upstream

cc @lukeelmers

@kibanamachine
Contributor

💚 Backport successful

7.x / #94999

This backport PR will be merged automatically after passing CI.

@lukeelmers lukeelmers deleted the fix/max-import-export branch March 19, 2021 01:31
Labels
auto-backport (Deprecated - use backport:version if exact versions are needed), docs, Feature:Saved Objects, release_note:skip (Skip the PR/issue when compiling release notes), Team:Core (Core services & architecture: plugins, logging, config, saved objects, http, ES client, i18n, etc), v7.13.0, v8.0.0
Development

Successfully merging this pull request may close these issues.

[core.savedObjects] Documentation & cleanup for maxImportExportSize config
6 participants