slow git blame on files with large history leading to OOM kills #5110
Comments
What happens if you do the same thing on a repo cloned without partial cloning?
I can reproduce a super slow `git blame`; it takes minutes. I wonder if git has some sort of inflate operation that would allow us to fetch the blobs asynchronously after content initialization.
I've been thinking the same thing - I had a look a few days ago while looking into this. Also, we should double-check the Git issues: is this resource demand / bad performance a known issue?
Edit: This doesn't do the job! Just stumbled over an alternative which seems to do the trick. (Source)
True, I ran it on a repo that was already inflated. Generally, the inflation wouldn't help us much if we were running it eagerly, because many users would not wait for the git clone but for content init from a prebuild. So if we always inflate, we are back to the same size on disk.
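For illustration, backfilling ("inflating") a blobless partial clone can look roughly like the sketch below on newer Git versions. This is not necessarily the command referenced in the comments above; `--refetch` requires Git 2.36 or newer.

```sh
# create a blobless partial clone (this is what makes git blame fetch blobs lazily)
git clone --filter=blob:none https://github.com/gitpod-io/vscode
cd vscode

# count the objects the partial clone is still missing
git rev-list --objects --all --missing=print | grep -c '^?'

# backfill everything, ignoring the partial-clone filter (Git >= 2.36)
git fetch --refetch origin
```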
Discussed on the engineering call: we'll add it to the groundwork inbox to research a solution that preserves partial cloning and inflates the workspace afterwards, when it is needed. Maybe add a flag in .gitpod.yml to enable such inflation for special cases like this one and #4914.
Let's investigate this with a timebox of one day. /schedule
I think that we should make the partial clone an opt-in feature that one can enable through .gitpod.yml. It's too bad this improvement is not transparent enough to make it opt-out, but the impact of not being able to fork (#4914) and of sometimes experiencing OOM kills and/or long waiting times on git blame is surprising and will leave users with a bad experience. I'd rather compromise on the loading speed than on the functional experience.
Discussed with @svenefftinge: we are reverting it until there is a better solution that does not break blame and, most importantly, fork.
Partial clone was reverted in #5152.
Bug description
For the last 2 weeks I have been getting OOM kills while working on the https://github.com/gitpod-io/vscode repo. After investigation it turned out that some git operations trigger git gc/repack, which sometimes runs for a long time and consumes a lot of memory. Newly started processes then cannot get more memory and are killed, eventually leading to workspace eviction. I found several triggers in my setup: (1) I had auto git fetch enabled, (2) git blame via GitLens, as well as (3) git push triggers git gc from time to time. In particular, blaming via GitLens can be unresponsive for minutes. You can see it on the screenshot below:
Don't pay attention to htop saying 33.G/62.8G; it does not know about cgroups. Instead, have a look at the status bar: it shows almost 100% utilisation of the memory limited by cgroups, which is what triggers OOM kills. It is computed here: https://github.com/akosyakov/gitpod-monitor/blob/96d342a3abb13e6608dbd48fdf9efbc4c0700b55/src/extension.ts#L37-L39
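For reference, the cgroup figures that the status-bar percentage is derived from can be inspected directly; a minimal sketch, assuming cgroup v1 paths inside the workspace (cgroup v2 exposes `memory.current` / `memory.max` instead):

```sh
# memory usage and limit as enforced by the cgroup (what the OOM killer acts on)
usage=$(cat /sys/fs/cgroup/memory/memory.usage_in_bytes)
limit=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)
echo "memory: $(( 100 * usage / limit ))% of cgroup limit"
```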
I suspect the cause is the partial cloning.
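That suspicion is easy to check: git records partial-clone state in the repository config, so inside the workspace the following should show whether the clone is partial and which filter was used (a sketch; the exact values depend on how the workspace was cloned):

```sh
git config remote.origin.promisor            # prints "true" for a partial clone
git config remote.origin.partialclonefilter  # e.g. "blob:none"
```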
Steps to reproduce
Install GitLens and use blame on different files in https://github.com/gitpod-io/vscode while running some task terminals. My particular trigger was running `yarn gitpod-min`.
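The same pressure can likely be reproduced without GitLens; a rough sketch, assuming the workspace repository is a blobless partial clone (the file path is only an example):

```sh
# blame a file with a long history; in a partial clone the blob for each
# touched revision may have to be fetched on demand, which is what makes it slow
git blame package.json > /dev/null

# a manual repack shows the gc/repack memory spike independently of GitLens
git repack -ad
```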
Expected behavior
No response
Example repository
No response
Anything else?
No response