[core.savedObjects] Fix maxImportExportSize config & update docs. #94019
One open question is whether we want to increase this default to something higher than 10k now that we aren't bound by the index.max_result_window setting.
I did a quick and dirty test locally to see how memory would be affected: I indexed multiple copies of a relatively small index pattern saved object (600 bytes), then examined the metrics.ops logs to compare memory consumption before & after running an export on the objects. The other noticeable difference was that by the time it hit 30k objects, the export was taking >30s, and the event loop delay spiked to 13s. 😬
This was an admittedly imperfect test -- I'd need to do a few more runs to get to something conclusive -- but wanted to post the initial results here for others.
Do we have a sense as to how often folks have run into issues with hitting the 10k export limit? Trying to weigh how seriously we should consider increasing it.
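For anyone who wants to repeat this, here is roughly what the export side of the test could look like (a sketch only: it assumes the documented POST /api/saved_objects/_export endpoint, and the host, credentials, and object type are placeholders to adjust):

```ts
// Sketch: trigger a saved objects export and report how long it takes.
// Requires Node 18+ (global fetch). Host and credentials are placeholders.
const KIBANA_URL = 'http://localhost:5601';
const AUTH = 'Basic ' + Buffer.from('elastic:changeme').toString('base64');

async function timeExport(): Promise<void> {
  const start = Date.now();
  const res = await fetch(`${KIBANA_URL}/api/saved_objects/_export`, {
    method: 'POST',
    headers: {
      'kbn-xsrf': 'true',
      'Content-Type': 'application/json',
      Authorization: AUTH,
    },
    // Export all objects of the given type; the response body is NDJSON.
    body: JSON.stringify({ type: ['index-pattern'], excludeExportDetails: true }),
  });
  const ndjson = await res.text();
  const objectCount = ndjson.trim().split('\n').length;
  console.log(
    `status=${res.status} objects=${objectCount} bytes=${ndjson.length} took=${Date.now() - start}ms`
  );
}

// Watch the metrics.ops log output separately to compare heap usage and
// event loop delay before, during, and after the export.
timeExport().catch((err) => {
  console.error(err);
  process.exit(1);
});
```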
I don't think it's a memory leak; after a while the memory drops back down to the same levels as before the export was initiated. I'll post more results on this shortly.
The SO exporter collects everything in memory before streaming the response back to the browser. It appears we do this because the results all get sorted by id, so we need the full collection before sorting (which is very likely part of the slowness). That's also the main use case for maxImportExportSize at this point, as it would prevent the Kibana server from using too much memory when a large export is happening. (Ideally we'd stream these results back directly as we get them, but that would mean they are unsorted... not sure how important it is to sort everything or why we chose to do this?)
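To make the trade-off concrete, here is a minimal sketch of the two shapes (not the actual exporter code; fetchBatches stands in for whatever paging the exporter does): sorting by id forces the full result set into memory before anything is written, while an unsorted pipeline could emit each batch as it arrives.

```ts
import { Readable } from 'stream';

interface SavedObjectLike {
  id: string;
  type: string;
  attributes: Record<string, unknown>;
}

// Shape 1: collect everything, sort by id, then wrap the full array in a
// stream. Memory usage grows with the total export size.
async function exportSorted(
  fetchBatches: () => AsyncIterable<SavedObjectLike[]>
): Promise<Readable> {
  const all: SavedObjectLike[] = [];
  for await (const batch of fetchBatches()) {
    all.push(...batch); // the whole export is held in memory here
  }
  all.sort((a, b) => a.id.localeCompare(b.id));
  return Readable.from(all.map((obj) => JSON.stringify(obj) + '\n'));
}

// Shape 2: emit each batch as it arrives. Memory stays roughly constant,
// but the output is no longer globally sorted by id.
function exportUnsorted(
  fetchBatches: () => AsyncIterable<SavedObjectLike[]>
): Readable {
  async function* lines() {
    for await (const batch of fetchBatches()) {
      for (const obj of batch) {
        yield JSON.stringify(obj) + '\n';
      }
    }
  }
  return Readable.from(lines());
}
```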
Okay, I finally got some time to look more deeply at this. Here's a summary of findings:
Method
- Set ops.interval: 1000 so we are getting fresh metrics every second.
- Monitored the savedobjects-service.exporter and metrics.ops logs.
- Imported the objects with createNewCopies=true so each one gets a unique ID and is treated as a new object (see the sketch below).
- Ran the exports against index-pattern objects.
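The copies themselves can be generated and imported with a few lines (a sketch: the import endpoint and createNewCopies query parameter follow the documented saved objects import API; the host, credentials, and index-pattern attributes are placeholders):

```ts
// Sketch: build an NDJSON payload of N small index-pattern objects and import
// it with createNewCopies=true so every object gets a fresh ID.
// Requires Node 18+ (global fetch, FormData, Blob). Placeholders throughout.
const KIBANA_URL = 'http://localhost:5601';
const AUTH = 'Basic ' + Buffer.from('elastic:changeme').toString('base64');

function buildNdjson(count: number): string {
  const lines: string[] = [];
  for (let i = 0; i < count; i++) {
    lines.push(
      JSON.stringify({
        id: `test-pattern-${i}`,
        type: 'index-pattern',
        attributes: { title: `test-index-${i}-*`, timeFieldName: '@timestamp' },
        references: [],
      })
    );
  }
  return lines.join('\n');
}

async function importCopies(count: number): Promise<void> {
  // Note: imports larger than the configured savedObjects.maxImportExportSize
  // will be rejected, so the limit may need to be raised for bigger batches.
  const form = new FormData();
  form.append('file', new Blob([buildNdjson(count)]), 'copies.ndjson');

  const res = await fetch(
    `${KIBANA_URL}/api/saved_objects/_import?createNewCopies=true`,
    {
      method: 'POST',
      headers: { 'kbn-xsrf': 'true', Authorization: AUTH },
      body: form,
    }
  );
  console.log(res.status, await res.json());
}

importCopies(10_000).catch(console.error);
```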
Findings
Did 3 runs for each scenario; approximate averages for each are presented below, with some outliers dropped. Numbers may not add up precisely as they are rounded to 1 decimal.
Observations
In general the findings were directionally consistent with the quick test I did yesterday: there appears to be a modest jump when going from 10k to 20k objects, and a much more significant jump from 20k to 30k. Each run of the 30k tests also had much more variance: I did a few extra runs and some came out in a ~55-65% range, while others were closer to ~85-86%. Not sure what to make of this discrepancy, but regardless we still see a much bigger jump from 20k to 30k.
I didn't crunch precise numbers on this, but I also saw a jump in the event loop delay as the object count increased: 10k objects ended with a delay around ~1-3s, 20k ~3-5s, 30k ~5-10s. This leads me to believe the 30s delay I experienced with 30k objects yesterday may be an anomaly.
Lastly, I noticed that based on the log timestamps, the slowest single step in the export is when the objects are being processed for export: 10k ~2s, 20k ~10s, 30k ~22s. This is the period of time where the results are running through the processObjects exporter method, which handles applying the export transforms, optionally fetching references (which I did not do in my testing), and then sorting the objects before creating a stream with the results.
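For anyone skimming, that processing step amounts to roughly the following shape (purely illustrative; the function and option names here are hypothetical stand-ins, not the actual exporter internals):

```ts
import { Readable } from 'stream';

interface SavedObjectLike {
  id: string;
  type: string;
  attributes: Record<string, unknown>;
}

// Hypothetical outline of the processing step: apply export transforms,
// optionally pull in referenced objects, sort by id, then build the stream
// that is sent back to the browser.
async function processObjectsSketch(
  objects: SavedObjectLike[],
  options: {
    includeReferences: boolean;
    applyTransforms: (objs: SavedObjectLike[]) => Promise<SavedObjectLike[]>;
    fetchReferences: (objs: SavedObjectLike[]) => Promise<SavedObjectLike[]>;
  }
): Promise<Readable> {
  let processed = await options.applyTransforms(objects);
  if (options.includeReferences) {
    processed = processed.concat(await options.fetchReferences(processed));
  }
  // The sort is what requires the whole collection to be materialized first.
  processed.sort((a, b) => a.id.localeCompare(b.id));
  return Readable.from(processed.map((obj) => JSON.stringify(obj) + '\n'));
}
```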
TLDR
I think we could consider bumping the default to 20k, as it generally presents a modest increase in memory consumption & delay; however, I'd hesitate to take it as far as 30k based on these findings. FWIW I also don't think we need to increase the default at all; IMHO that depends on how often we are asked about exporting more than 10k. As the setting remains configurable, we should optimize for whatever will cover the vast majority of use cases.
Lastly, at some point we should do some proper profiling on the exporter itself. I don't know exactly where in processObjects the bottlenecks are based on the metrics logs alone, but the object sorting sticks out to me. Perhaps there's a technical reason I'm not aware of that we are doing this... but if we could simply stream back results as we get them, applying transformations along the way without sorting, I expect we'd see a pretty big increase in performance.
Thanks for looking into this!
The import/export API was designed to be streaming, but the implementation was never done and it has never come up as a requirement. We might be exporting many more saved objects in the next few releases, so I created #94552 to investigate the risk (and also added some context there around the reason for the sorting).
You mentioned that you averaged the numbers and removed outliers. For this profiling I think we should be looking at the worst-case scenarios and outliers rather than the averages. Even if it doesn't happen every time, if under certain circumstances the garbage collector is slow and we see a much bigger spike, we should be using that spike to decide whether a given export size is safe.
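To make that concrete with a trivial sketch (the sample values here are made up): summarizing each run by its maximum heap sample, not just its average, keeps those worst-case spikes visible.

```ts
// Illustrative only: summarize sampled heap-used percentages for one run,
// reporting the worst case (max) alongside the average.
function summarize(samples: number[]): { avg: number; max: number } {
  const avg = samples.reduce((sum, s) => sum + s, 0) / samples.length;
  const max = Math.max(...samples);
  return { avg: Number(avg.toFixed(1)), max: Number(max.toFixed(1)) };
}

// Hypothetical samples collected once per second during an export run.
console.log(summarize([42.3, 55.1, 61.8, 85.9, 63.2]));
// -> { avg: 61.7, max: 85.9 }
```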
Pondering on this a bit more...
I think it might make sense to leave the default as-is until we do #94552 and have a better sense of what default will actually work for the majority of users.
The outliers I dropped in averaging the numbers were all ones that were unusually low, so the numbers above should still reflect a worst-case scenario. If I were to include the outliers, the numbers would all drop slightly.
++ Agreed... if we plan to make a dedicated effort around this, then changing the default now feels unnecessary.