
[Fleet] Change default batch size #161249

Merged: 1 commit merged into elastic:main from fix-default-batch-size on Jul 5, 2023

Conversation

@nchaulet (Member) commented on Jul 5, 2023

Summary

Resolve #158361

Change the default batch size for schema version upgrades from 100 concurrent policies to 2, to reduce memory usage.

When we make a schema change on the agent policy, we need to update all existing policies. The current code processed 100 policies concurrently, which could cause memory issues on a small Kibana instance. Since we recommend at most 500 agent policies, and the upgrade is triggered asynchronously during Kibana start, it should still complete in a reasonable amount of time.
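
For illustration only, here is a minimal sketch of the batching idea (not the actual Fleet implementation; `upgradeOutdatedPolicies` and the constant name are made up). Bounding concurrency with something like p-map means only a couple of policy upgrades are in flight, and therefore held in memory, at any one time:

```ts
// Illustrative sketch only, not the actual Fleet code.
import pMap from 'p-map';

interface AgentPolicy {
  id: string;
  schema_version?: string;
}

// Default lowered from 100 to 2 by this PR.
const DEFAULT_SCHEMA_UPGRADE_BATCH_SIZE = 2;

async function upgradeOutdatedPolicies(
  policies: AgentPolicy[],
  upgradeOne: (policy: AgentPolicy) => Promise<void>,
  batchSize: number = DEFAULT_SCHEMA_UPGRADE_BATCH_SIZE
) {
  // pMap keeps at most `concurrency` upgrades in flight, which bounds how many
  // full policies are loaded and processed in memory at the same time.
  await pMap(policies, upgradeOne, { concurrency: batchSize });
}
```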

Alternative solution

Instead of configuring this as the default for everyone, we could configure it in the stack pack only for small Kibana instances.
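
As a sketch of what that could look like (the setting name is hypothetical and not part of this PR), the batch size could be exposed as a Fleet setup config option that keeps the old default, and the stack pack for small instances would override it:

```ts
// Hypothetical config option, shown only to illustrate the alternative.
import { schema, type TypeOf } from '@kbn/config-schema';

export const fleetSetupConfig = schema.object({
  // Default stays at 100; a stack pack for small instances could set this to 2.
  agentPolicySchemaUpgradeBatchSize: schema.number({ defaultValue: 100, min: 1 }),
});

export type FleetSetupConfig = TypeOf<typeof fleetSetupConfig>;
```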

nchaulet added the Team:Fleet label (Team label for Observability Data Collection Fleet team) on Jul 5, 2023
nchaulet self-assigned this on Jul 5, 2023
nchaulet requested a review from a team as a code owner on July 5, 2023 11:43
@elasticmachine (Contributor) commented:

Pinging @elastic/fleet (Team:Fleet)

nchaulet added the release_note:skip label (Skip the PR/issue when compiling release notes) on Jul 5, 2023
@apmmachine (Contributor) commented:

🤖 GitHub comments

Just comment with:

  • /oblt-deploy : Deploy a Kibana instance using the Observability test environments.
  • run elasticsearch-ci/docs : Re-trigger the docs validation. (use unformatted text in the comment!)

@jlind23 (Contributor) commented on Jul 5, 2023

@nchaulet do you think that in the long term we could have a batch size inferred from the available memory?

@kibana-ci (Collaborator) commented:

💚 Build Succeeded

Metrics [docs]

Async chunks

Total size of all lazy-loaded chunks that will be downloaded as the user navigates the app

| id    | before  | after   | diff    |
| ----- | ------- | ------- | ------- |
| fleet | 975.2KB | 975.4KB | +215.0B |

Unknown metric groups

ESLint disabled line counts

| id               | before | after | diff |
| ---------------- | ------ | ----- | ---- |
| enterpriseSearch | 14     | 16    | +2   |
| securitySolution | 410    | 414   | +4   |
| total            |        |       | +6   |

Total ESLint disabled count

| id               | before | after | diff |
| ---------------- | ------ | ----- | ---- |
| enterpriseSearch | 15     | 17    | +2   |
| securitySolution | 489    | 493   | +4   |
| total            |        |       | +6   |

To update your PR or re-run it, just comment with:
@elasticmachine merge upstream

cc @nchaulet

@nchaulet (Member, Author) commented on Jul 5, 2023

> @nchaulet do you think that in the long term we could have a batch size inferred from the available memory?

Yes, it's probably something we can improve in the long term.

Also, regarding your offline question about when we trigger those upgrades: it happens when we need a new feature deployed to all agent policies. For example, in 8.8 the schema version was bumped to introduce agent protection features to the agent policy.
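
A rough sketch of what inferring the batch size from available memory might look like (not implemented in this PR; the per-upgrade memory estimate and the 2..100 bounds are arbitrary placeholders):

```ts
// Speculative sketch only: derive a concurrency value from heap headroom.
import { getHeapStatistics } from 'v8';

function inferSchemaUpgradeBatchSize(): number {
  const { heap_size_limit: heapLimit, used_heap_size: usedHeap } = getHeapStatistics();
  const headroomBytes = heapLimit - usedHeap;

  // Assume (arbitrarily) that each in-flight policy upgrade needs roughly 50 MB of heap.
  const perUpgradeBytes = 50 * 1024 * 1024;
  const inferred = Math.floor(headroomBytes / perUpgradeBytes);

  // Never go below the new default of 2, never above the old value of 100.
  return Math.min(100, Math.max(2, inferred));
}
```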

nchaulet added the v8.9.0 label on Jul 5, 2023
nchaulet merged commit 8ef1287 into elastic:main on Jul 5, 2023
nchaulet deleted the fix-default-batch-size branch on July 5, 2023 13:49
kibanamachine pushed a commit to kibanamachine/kibana that referenced this pull request Jul 5, 2023
@kibanamachine (Contributor) commented:

💚 All backports created successfully

| Status | Branch | Result |
| ------ | ------ | ------ |
| ✅     | 8.9    |        |

Note: Successful backport PRs will be merged automatically after passing CI.

Questions?

Please refer to the Backport tool documentation

kibanamachine added a commit that referenced this pull request Jul 5, 2023
# Backport

This will backport the following commits from `main` to `8.9`:
- [[Fleet] Change default batch size
(#161249)](#161249)

<!--- Backport version: 8.9.7 -->

### Questions ?
Please refer to the [Backport tool
documentation](https://github.com/sqren/backport)

<!--BACKPORT [{"author":{"name":"Nicolas
Chaulet","email":"[email protected]"},"sourceCommit":{"committedDate":"2023-07-05T13:49:38Z","message":"[Fleet]
Change default batch size
(#161249)","sha":"8ef128701757ad24617e0c97ab8e0a186bc87ec2","branchLabelMapping":{"^v8.10.0$":"main","^v(\\d+).(\\d+).\\d+$":"$1.$2"}},"sourcePullRequest":{"labels":["release_note:skip","Team:Fleet","v8.9.0","v8.10.0"],"number":161249,"url":"https://github.com/elastic/kibana/pull/161249","mergeCommit":{"message":"[Fleet]
Change default batch size
(#161249)","sha":"8ef128701757ad24617e0c97ab8e0a186bc87ec2"}},"sourceBranch":"main","suggestedTargetBranches":["8.9"],"targetPullRequestStates":[{"branch":"8.9","label":"v8.9.0","labelRegex":"^v(\\d+).(\\d+).\\d+$","isSourceBranch":false,"state":"NOT_CREATED"},{"branch":"main","label":"v8.10.0","labelRegex":"^v8.10.0$","isSourceBranch":true,"state":"MERGED","url":"https://github.com/elastic/kibana/pull/161249","number":161249,"mergeCommit":{"message":"[Fleet]
Change default batch size
(#161249)","sha":"8ef128701757ad24617e0c97ab8e0a186bc87ec2"}}]}]
BACKPORT-->

Co-authored-by: Nicolas Chaulet <[email protected]>
@@ -519,6 +520,17 @@ export const AgentListPage: React.FunctionComponent<{}> = () => {
<EuiSpacer size="l" />
</>
)}
{/* TODO serverless agent soft limit */}
{showUnhealthyCallout && (

A contributor commented on this diff:

@nchaulet Is this an accidental change? I am seeing a double notification on Fleet UI on main:

[screenshot of the duplicated notification in the Fleet UI]

kilfoyle added a commit that referenced this pull request Jul 12, 2023
Adding #161249 (Kibana can run out
of memory during an upgrade when there are many Fleet agent policies in
place) to known issues for 8.8.x.

---------

Co-authored-by: David Kilfoyle <[email protected]>
kilfoyle added a commit to kilfoyle/kibana that referenced this pull request Jul 12, 2023 (same known-issues note)

kilfoyle pushed a commit to kilfoyle/kibana that referenced this pull request Jul 12, 2023 (same known-issues note; cherry picked from commit e5cce01)

kilfoyle added a commit to kilfoyle/kibana that referenced this pull request Jul 12, 2023 (same known-issues note)
Labels
release_note:skip (Skip the PR/issue when compiling release notes), Team:Fleet (Team label for Observability Data Collection Fleet team), v8.9.0, v8.10.0
Projects
None yet
Development

Successfully merging this pull request may close these issues.

[Fleet]: Kibana upgrade failed from 8.7.1>8.8.0 BC8 when multiple agent policies with integrations exist.
7 participants