[core.savedObjects] Fix maxImportExportSize config & update docs. #94019
Conversation
```diff
@@ -28,7 +28,7 @@ export const savedObjectsConfig = {
   path: 'savedObjects',
   schema: schema.object({
     maxImportPayloadBytes: schema.byteSize({ defaultValue: 26214400 }),
-    maxImportExportSize: schema.byteSize({ defaultValue: 10000 }),
+    maxImportExportSize: schema.number({ defaultValue: 10000 }),
```
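For context, a minimal sketch of why the type change matters, assuming `@kbn/config-schema` and the `schema.byteSize`/`schema.number` factories shown in the diff above: a byte-size schema accepts and reports values as bytes, while this setting is actually a count of objects.

```ts
import { schema } from '@kbn/config-schema';

// Before: a byte-size type, which accepts values like '10kb' and treats the
// default as a number of bytes -- misleading for a setting that is an object count.
const asByteSize = schema.byteSize({ defaultValue: 10000 });
asByteSize.validate('10kb'); // parsed as a ByteSizeValue

// After: a plain number, so `savedObjects.maxImportExportSize: 10000`
// in kibana.yml reads as "up to 10,000 saved objects".
const asNumber = schema.number({ defaultValue: 10000 });
asNumber.validate(10000); // 10000, a count rather than bytes
```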
One open question is whether we want to increase this default to something higher than 10k, now that we aren't bound by `index.max_result_window`.

I did a quick and dirty test locally to see how memory would be affected: I indexed multiple copies of a relatively small index pattern saved object (600 bytes), then examined the `metrics.ops` logs to compare memory consumption before & after running an export on the objects:
| # of objects | Memory on export start | Memory on export end | % increase |
|---|---|---|---|
| 10k | 246MB | 276MB | ~12% |
| 20k | 314MB | 413MB | ~32% |
| 30k | 298MB | 456MB | ~53% |
The other noticeable difference was that by the time it hit 30k objects, the export was taking >30s, and the event loop delay spiked to 13s. 😬
This was an admittedly imperfect test -- I'd need to do a few more runs to get to something conclusive -- but I wanted to post the initial results here for others.
Do we have a sense as to how often folks have run into issues with hitting the 10k export limit? Trying to weigh how seriously we should consider increasing it.
- Does the memory go back to closer to the start value after a minute or two? I'm curious if we just have a memory leak here.
- Do we stream the results back to the browser as we get them or do we cache everything in memory before sending the final payload?
> Does the memory go back to closer to the start value after a minute or two? I'm curious if we just have a memory leak here.

I don't think it's a memory leak; after a while the memory drops back down to the same levels as before the export was initiated. Will post more results on this shortly.

> Do we stream the results back to the browser as we get them or do we cache everything in memory before sending the final payload?

The SO exporter collects everything in memory before streaming the response back to the browser. It appears we do this because the results all get sorted by `id`, so we need the full collection before sorting (which is very likely part of the slowness). That's also the main use case for `maxImportExportSize` at this point, as it would prevent the Kibana server from using too much memory when a large export is happening. (Ideally we'd stream these results back directly as we get them, but that would mean they are unsorted... not sure how important it is to sort everything or why we chose to do this?)
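As a rough sketch of how that cap guards memory (hypothetical names; this is not the actual exporter code):

```ts
// Hypothetical helper illustrating how a cap like maxImportExportSize protects a
// collect-everything-in-memory export: if the total object count exceeds the
// limit, the export fails fast instead of buffering the whole result set.
function assertExportSize(objectCount: number, maxImportExportSize: number): void {
  if (objectCount > maxImportExportSize) {
    // Surfaced to the caller as an exportSizeExceeded error.
    throw new Error(`Can't export more than ${maxImportExportSize} objects`);
  }
}

// Usage sketch: check the count before collecting and sorting the full set.
// assertExportSize(totalHits, config.maxImportExportSize);
```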
Okay, I finally got some time to look more deeply at this. Here's a summary of findings:
Method
- Run Kibana in dev mode:
  - Set `ops.interval: 1000` so we are getting fresh metrics every second
  - Enable debug logs for `savedobjects-service.exporter` and `metrics.ops`
- Use the SO import API to import 10k, 20k, and 30k identical index pattern saved objects (600 bytes each). Use `createNewCopies=true` so each one gets a unique ID and is treated as a new object.
- Restart the Kibana server. Wait until all of the startup activity slows down and the memory metrics begin to stabilize.
- Hit the SO export API directly to export all `index-pattern` objects (see the request sketch after this list).
- Capture the logs during the export, including an average of the metrics for the 5 seconds before the export was initiated and the 5 seconds after it was completed.
- Restart the Kibana server before each new test.
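For reference, a rough sketch of the import/export requests used in the steps above, written against Kibana's saved objects import/export HTTP APIs. The local URL, and running on Node 18+ for global `fetch`/`FormData`/`Blob`, are assumptions.

```ts
// Sketch only: assumes a local dev Kibana and Node 18+ globals.
const KIBANA_URL = 'http://localhost:5601';

async function importObjects(ndjson: string): Promise<void> {
  // createNewCopies=true gives every imported object a fresh ID, so the same
  // NDJSON file can be imported repeatedly to build up 10k/20k/30k objects.
  const form = new FormData();
  form.append('file', new Blob([ndjson], { type: 'application/ndjson' }), 'objects.ndjson');
  await fetch(`${KIBANA_URL}/api/saved_objects/_import?createNewCopies=true`, {
    method: 'POST',
    headers: { 'kbn-xsrf': 'true' },
    body: form,
  });
}

async function exportIndexPatterns(): Promise<string> {
  // Exports every index-pattern saved object as NDJSON.
  const res = await fetch(`${KIBANA_URL}/api/saved_objects/_export`, {
    method: 'POST',
    headers: { 'kbn-xsrf': 'true', 'Content-Type': 'application/json' },
    body: JSON.stringify({ type: 'index-pattern' }),
  });
  return res.text();
}
```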
Findings
I did 3 runs for each scenario; approximate averages for each are presented below, with some outliers dropped. Numbers may not add up precisely as they are rounded to 1 decimal.
| # objects | Avg memory on export start (MB) | Avg memory on export end (MB) | % increase |
|---|---|---|---|
| 10k | 228.5 | 276.0 | 20.8% |
| 20k | 224.3 | 278.4 | 24.2% |
| 30k | 225.4 | 387.8 | 72.0% |
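(The % increase column is computed as (end - start) / start; e.g. for 10k objects, (276.0 - 228.5) / 228.5 ≈ 20.8%.)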
Observations
In general the findings were directionally consistent with the quick test I did yesterday: There appears to be a modest jump when going from 10k to 20k objects, and a much more significant jump from 20k to 30k. Each run of the 30k tests also had much more variance: I did a few extra runs and some came out in a ~55-65% range, while others were closer to ~85-86%. Not sure what to make of this discrepancy, but regardless we still see a much higher jump from 20-30k.
I didn't crunch precise numbers on this, but I also saw a jump in the event loop delay as the object count increased: 10k objects ended with a delay around ~1-3s, 20k ~3-5s, 30k ~5-10s. This leads me to believe the 30s delay I experienced with 30k objects yesterday may be an anomaly.
Lastly, I noticed that based on the log timestamps, the slowest single step in the export is when the objects are being processed for export: 10k ~2s, 20k ~10s, 30k ~22s. This is the period of time where the results are running through the `processObjects` exporter method, which handles applying the export transforms, optionally fetching references (which I did not do in my testing), and then sorting the objects before creating a stream with the results.
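As a rough illustration of that pipeline (simplified, with hypothetical names; not the real exporter internals):

```ts
interface SavedObject {
  id: string;
  type: string;
  attributes: Record<string, unknown>;
}

type ExportTransform = (objects: SavedObject[]) => Promise<SavedObject[]>;

// Everything below operates on the full in-memory collection before any
// bytes are written to the response stream.
async function processObjectsSketch(
  objects: SavedObject[],
  transforms: ExportTransform[],
  includeReferences: boolean
): Promise<SavedObject[]> {
  let processed = objects;
  for (const transform of transforms) {
    // Apply registered export transforms to the whole collection.
    processed = await transform(processed);
  }
  if (includeReferences) {
    // Optionally fetch referenced objects and append them
    // (not exercised in the test runs above).
  }
  // Sort the entire collection by id, which requires holding it all in memory;
  // only then does the caller wrap the result in a stream.
  return [...processed].sort((a, b) => a.id.localeCompare(b.id));
}
```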
TLDR
I think we could consider bumping the default to 20k, as it generally presents a modest increase in memory consumption & delay; however, I'd hesitate to take it as far as 30k based on these findings. FWIW, I also don't think we need to increase the default at all; IMHO that depends on how often we are asked about exporting more than 10k. As the setting remains configurable, we should optimize for whatever will cover the vast majority of use cases.
Lastly, at some point we should do some proper profiling on the exporter itself. I don't know exactly where in `processObjects` the bottlenecks are based on the metrics logs alone, but the object sorting sticks out to me. Perhaps there's a technical reason I'm not aware of that we are doing this... But if we could simply stream back results as we get them, applying transformations along the way without sorting, I expect we'd see a pretty big increase in performance.
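A minimal sketch of the streaming alternative being suggested here, using Node's stream utilities (this is not the actual exporter; per-object transforms and ordering guarantees are glossed over):

```ts
import { Transform } from 'stream';

// Serializes saved objects to NDJSON as they arrive, instead of buffering
// and sorting the whole collection first. Peak memory stays roughly flat
// regardless of export size, but output order follows fetch order.
function createNdjsonTransform(): Transform {
  return new Transform({
    objectMode: true,
    transform(savedObject, _encoding, callback) {
      // Per-object export transforms could be applied here before serializing.
      callback(null, JSON.stringify(savedObject) + '\n');
    },
  });
}
```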
Thanks for looking into this!
The import/export API was designed to be streaming, but the implementation was never done and it has never come up as a requirement. We might be exporting many more saved objects in the next few releases, so I created #94552 to investigate the risk (I also added some context there around the reason for the sorting).

You mentioned that you averaged the numbers and removed outliers. For this profiling, I think we should be looking at the worst-case scenarios and outliers rather than the averages. Even if it doesn't happen every time, if under certain circumstances the garbage collector is slow and we see a much bigger spike, we should use that spike to decide whether a given export size is safe.
Pondering on this a bit more...
I think it might make sense to leave the default as-is until we do #94552 and have a better sense of what default will actually work for the majority of users.
> For this profiling, I think we should be looking at the worst-case scenarios and outliers rather than the averages.
The outliers I dropped in averaging the numbers were all ones that were unusually low, so the numbers above should still reflect a worst-case scenario. If I were to include the outliers, the numbers would all drop slightly.
> I think it might make sense to leave the default as-is until we do #94552 and have a better sense of what default will actually work for the majority of users.
++ Agreed... if we plan to make a dedicated effort around this, then changing the default now feels unnecessary.
Pinging @elastic/kibana-core (Team:Core)
Do we want to add a deprecation for this in `src/core/server/config/deprecation/core_deprecations.ts`?
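(For illustration only: if we did, a deprecation entry might look roughly like the sketch below. The provider shape, factory names, and import path are assumptions here, not copied from the real file.)

```ts
// Hypothetical sketch of a config deprecation entry; the factory names and
// signatures are assumed, not taken from core_deprecations.ts.
import { ConfigDeprecationProvider } from '@kbn/config';

export const savedObjectsDeprecations: ConfigDeprecationProvider = ({ unused }) => [
  // Flag savedObjects.maxImportExportSize as deprecated/unused if we decide
  // to remove it once streaming exports land.
  unused('savedObjects.maxImportExportSize'),
];
```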
@pgayvallet IMO we should wait until we come to some conclusions in #94552, as that might affect what we do with this config. (Edit: Unless we know for certain that we want to get rid of this eventually, but I don't think we do... do we?)
Closes #91172

- Changes the `maxImportExportSize` config from `schema.byteSize` to `schema.number`
- Removes the byte-size wording for the `maxImportExportSize` setting from the `exportSizeExceeded` error
- Renames `maxImportExportSizeInBytes` -> `maxImportExportSize` in the docs