
[Fleet] Integration listing performance - integration grid rendering #122660

Closed
nchaulet opened this issue Jan 11, 2022 · 2 comments · Fixed by #132455
Labels: performance, Team:Fleet (Team label for Observability Data Collection Fleet team)

Comments

@nchaulet (Member) commented Jan 11, 2022

Description

Related to #118751

Rendering the whole integration list is slow with the current number of integrations (256).

In the attached CPU profile (recorded with CPU throttling) you can see two big blocks of blocking JS: one when the list is first rendered with the non-Fleet integrations, and a second when it is re-rendered with the Fleet integrations.

We also load the icon for every package, which could be problematic.

[Screenshot: CPU profile of the integration list rendering, 2022-01-11 9:39 AM]

profile-integration-list.json.zip

Potential solutions

Maybe we could virtualize the list so that we do not render all 256 cards when they are not needed (a rough sketch follows below).
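
A minimal sketch of what that could look like, assuming react-window (or a similar virtualization library) would be acceptable as a dependency; `PackageListItem`, the card markup, and the dimensions are placeholders, not the actual Fleet code:

```tsx
import React from 'react';
import { FixedSizeGrid } from 'react-window';

// Placeholder shape for an item in the integration list.
interface PackageListItem {
  name: string;
  title: string;
}

const COLUMN_COUNT = 4;
const CARD_WIDTH = 280;
const CARD_HEIGHT = 140;

// Only the cells currently visible in the viewport are mounted, so the
// 256 cards are not all rendered up front.
export const VirtualizedIntegrationGrid: React.FC<{ packages: PackageListItem[] }> = ({
  packages,
}) => (
  <FixedSizeGrid
    columnCount={COLUMN_COUNT}
    columnWidth={CARD_WIDTH}
    rowCount={Math.ceil(packages.length / COLUMN_COUNT)}
    rowHeight={CARD_HEIGHT}
    width={COLUMN_COUNT * CARD_WIDTH}
    height={600}
  >
    {({ columnIndex, rowIndex, style }) => {
      const pkg = packages[rowIndex * COLUMN_COUNT + columnIndex];
      // Trailing cells in the last row may be empty.
      return pkg ? <div style={style}>{pkg.title}</div> : null;
    }}
  </FixedSizeGrid>
);
```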

Alternatively, just lazy loading the PackageIcon component, which seems to be one of the most problematic parts, could already help (see the sketch below).
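
A rough sketch of deferring the icon component with React.lazy; the `./package_icon` path, the PackageIcon props, and the wrapper name are assumptions for illustration, not the actual Fleet module layout:

```tsx
import React, { Suspense } from 'react';
import { EuiLoadingSpinner } from '@elastic/eui';

// Split PackageIcon into its own chunk so its code (and icon fetching logic)
// is only loaded when a card actually renders it.
const LazyPackageIcon = React.lazy(() =>
  import('./package_icon').then((module) => ({ default: module.PackageIcon }))
);

export const PackageCardIcon: React.FC<{ packageName: string; version: string }> = ({
  packageName,
  version,
}) => (
  // Show a small spinner while the icon chunk is being fetched.
  <Suspense fallback={<EuiLoadingSpinner size="l" />}>
    <LazyPackageIcon packageName={packageName} version={version} />
  </Suspense>
);
```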

@nchaulet added the Team:Fleet (Team label for Observability Data Collection Fleet team) label on Jan 11, 2022
@elasticmachine (Contributor):

Pinging @elastic/fleet (Team:Fleet)

@joshdover (Contributor):

Let's scope this to only lazy loading icons, using the built-in browser support for this (sketched below).
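
A minimal sketch of that scoped-down approach, relying only on the native `loading="lazy"` attribute; the component name, props, and sizes here are placeholders rather than the actual Fleet implementation:

```tsx
import React from 'react';

export const LazyLoadedIcon: React.FC<{ iconSrc: string; title: string }> = ({
  iconSrc,
  title,
}) => (
  // `loading="lazy"` uses built-in browser support: off-screen images are not
  // fetched until they approach the viewport, so no extra JS is needed.
  <img src={iconSrc} alt={`${title} icon`} width={32} height={32} loading="lazy" />
);
```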
