test_gc is very flaky (#3537)
Is it reproducible enough for you that you think you can bisect it? Maybe it started happening after the recent upgrade of […]?

Appears that it can fail if […].
yuja added a commit to yuja/jj that referenced this issue on Apr 19, 2024:

> Apparently, these gc() invocations rely on the previous "git gc" having packed all refs, so that there are no loose refs whose mtimes need comparing. If there were remaining (or new?) loose refs, the mtime comparison could fail. Let's add +1 sec to effectively turn off the keep_newer option. Fixes jj-vcs#3537
yuja added a commit to yuja/jj that referenced this issue on Apr 21, 2024:

> This addresses the test instability. The underlying problem still exists, but it's unlikely to trigger user-facing issues, because a repo instance won't be reused after a gc() call. Fixes jj-vcs#3537
yuja added a commit to yuja/jj that referenced this issue on Apr 22, 2024:

> This addresses the test instability. The underlying problem still exists, but it's unlikely to trigger user-facing issues, because a repo instance won't be reused after a gc() call. Fixes jj-vcs#3537
yuja added a commit to yuja/jj that referenced this issue on May 23, 2024:

> gix 0.63 is now available. jj-vcs#3537
Original report:

test_git_backend::test_gc fails a relatively large amount of the time for me when I run the whole workspace, making `cargo nextest run --workspace` very difficult to use.

Note that I never see this when running solely this test and nothing else; my machine has 12 threads/6 cores, and nextest does well to saturate those cores, so I assume this is another concurrency bug that only happens under load.