
Opening a workspace takes a very long time when the prebuild generates big artifacts #7002

Closed · shaal opened this issue Dec 1, 2021 · 7 comments

Labels: meta: never-stale (This issue can never become stale)

@shaal (Contributor) commented Dec 1, 2021

Bug description

A workspace with an optimized custom Docker image (340 MB, 1 layer) opens quickly, in about 5 seconds.
The same workspace takes a very long time to open (over a minute) if the prebuild generated big artifacts.

Steps to reproduce

1. Choose a repo, open it in Gitpod, and measure how long it takes to load.
2. Add an init task to .gitpod.yml that creates a large file (e.g. 5 GB) during the prebuild (see the sketch below).
3. When the prebuild has finished, open the workspace and measure how long it takes to load.
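
A minimal .gitpod.yml sketch for step 2 (the image reference and file name are illustrative assumptions, not taken from the example repository):

```yaml
# Hypothetical .gitpod.yml for reproducing the issue.
image: gitpod/workspace-full

tasks:
  - init: |
      # Generate ~5 GB of incompressible data during the prebuild so it
      # ends up in the prebuild snapshot.
      dd if=/dev/urandom of=dummy.bin bs=1M count=5120
```

Using /dev/urandom keeps the data incompressible; a file of zeros would compress well in transit and could understate the effect.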

Workspace affected

No response

Expected behavior

  • I would not expect opening the workspace to take a minute or more longer just because a 5 GB file was created in the prebuild.
  • The interface only displays "Pulling container image" for a very long time, with no other details. It seems that more things happen after the container image is pulled, but there is no visibility into what is happening.

[Screenshot: workspace start stuck on "Pulling container image"]

Example repository

I created a very simple example to replicate the issue I am seeing.
Compare opening the main branch of this repo in Gitpod with PR #1:

Only 5 seconds - https://github.com/shaal/gitpod-image-speed-test
Over 1 minute - shaal/gitpod-image-speed-test#1

Both main and the PR use a 340 MB custom Docker image with 1 layer.
The only difference between the two?

  • The PR adds a 5 GB dummy file during the prebuild.

Anything else?

I demoed this issue during today's Gitpod Office Hours.
cc: @rfay @mikenikles @mrsimonemms @jldec

@shaal (Contributor, Author) commented Dec 1, 2021

@aledbf (Member) commented Dec 2, 2021

> I would not expect opening the workspace to take a minute or more longer just because a 5 GB file was created in the prebuild.

@shaal we still need to transfer the additional 5 GB from the container registry to the node. Keep in mind that the nodes in the cluster rotate periodically (so the image is downloaded again), and you can also open the same repository and land on two different nodes.
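
For rough intuition (the throughput figure here is purely an assumption, not a measured Gitpod number): at a sustained registry pull rate of about 80 MB/s, transferring 5 GB takes roughly 5120 MB / 80 MB/s ≈ 64 s, which is in the same ballpark as the extra minute observed above.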

That said, we are actively working on the build process, layers, caching, and distribution to improve scenarios like the one you are describing.

@jmls commented Dec 21, 2021

I'm having the same issue, except that my time differences are on a workspace that is simply opened, stopped, and reopened, using the workspace-full image and a prebuild.

Times vary from 45 s to over 2:30 for the same workspace with no changes.

stale bot commented Mar 25, 2022

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

The stale bot added the meta: stale label on Mar 25, 2022
@shaal (Contributor, Author) commented Mar 25, 2022

Please add the never-stale label to this issue.

The stale bot removed the meta: stale label on Mar 25, 2022
mrsimonemms added the meta: never-stale label on Mar 26, 2022
@sagor999 (Contributor) commented:
Once this epic (#7901) is done and shipped, it should greatly improve start-up times of workspaces that contain big artifacts.

@atduarte (Contributor) commented:
We are actively working to mitigate this issue with epic #9018, by increasing the chances that the image is cached and by lazily pulling image layers. And, as @sagor999 mentioned, epic #7901 should also help improve the situation.

Given we are actively working on both epics, which will fix this, I suggest closing this issue and following up on the epics directly. Feel free to reopen if you disagree or have further concerns 🙌
