Project.last_serial failed to update. #12933
I'm trying to think how this could have happened, and I think I have a theory for it: multiple transactions occurring in isolation.
If that's right, I suspect we can fix it by always issuing a
Another instance:
Manually resolved.
Two packages; I wonder if it's another instance of this issue, and whether someone is going to fix it manually.
Could anybody look into this? Maybe @ewdurbin?
Another instance:
Resolved:
Another instance:
Resolved:
I think that #13936 might fix this.
Maybe another instance on
@TechCiel Manually resolved that for you. In this instance we had two journals for the same project get issued within ~2 seconds of each other:
Thank you @di, I really appreciate the quick response, but the problem is not resolved. Maybe the manual update didn't purge the CDN cache?
should be 18769749.
@TechCiel Good catch. I've issued a purge and this should now be resolved.
Another package
Still.
Sadly, we now have another issue:
New failure with the
Is the way to go about this always to come here and manually request a purge? Is there nothing on our client end that can assist with these issues?
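As a purely illustrative note (not an official answer): a mirror client could at least detect the problem by comparing the serial it last mirrored against the `last_serial` reported by PyPI's public JSON API. This is a hedged sketch; the `stored_serial` bookkeeping and the `serial_regressed` helper are hypothetical, not part of bandersnatch or Warehouse.

```python
# Hedged sketch: detect a serial that appears to have gone backwards by
# comparing a locally stored serial against PyPI's public JSON API.
# The stored_serial argument and serial_regressed helper are illustrative.
import requests


def serial_regressed(project: str, stored_serial: int) -> bool:
    """Return True if PyPI reports a lower last_serial than we last mirrored."""
    resp = requests.get(f"https://pypi.org/pypi/{project}/json", timeout=10)
    resp.raise_for_status()
    return resp.json()["last_serial"] < stored_serial
```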
With #13936 merged (along with a few follow-up PRs to fix some deadlocks), I think the primary cause of this has now been fixed. The tl;dr is that our mirroring relied on the serial being a monotonically increasing integer, but due to the way PostgreSQL works, concurrent transactions could end up with serials being "out of order". #13936 changes that so that transactions that generate new serial numbers are serialized behind what is effectively a mutex. I'm going to close this now, but if anyone sees new reports of this happening after today, we can re-open this issue.
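For readers unfamiliar with the technique described above, here is a minimal sketch of serializing serial generation behind a PostgreSQL transaction-scoped advisory lock. This is not the actual #13936 change; the lock key, connection string, and the `journals` schema shown are assumptions for illustration only.

```python
# Hedged sketch of serializing journal/serial generation behind a PostgreSQL
# transaction-scoped advisory lock. Not Warehouse's actual implementation;
# the lock key and the journals schema below are illustrative.
import sqlalchemy as sa

# Hypothetical connection string for the example.
engine = sa.create_engine("postgresql+psycopg2://localhost/example")

JOURNAL_SERIAL_LOCK_KEY = 42  # illustrative advisory-lock key


def add_journal_entry(project_name: str, action: str) -> int:
    """Insert a journal row while holding the lock, so concurrent
    transactions cannot commit new serials out of order."""
    with engine.begin() as conn:
        # Blocks until the lock is free; released automatically when this
        # transaction commits or rolls back.
        conn.execute(
            sa.text("SELECT pg_advisory_xact_lock(:key)"),
            {"key": JOURNAL_SERIAL_LOCK_KEY},
        )
        result = conn.execute(
            sa.text(
                "INSERT INTO journals (name, action) "
                "VALUES (:name, :action) RETURNING id"
            ),
            {"name": project_name, "action": action},
        )
        return result.scalar_one()
```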
Likely (but not certain) another instance of this problem, still ongoing. Could someone look into the database? cc @dstufft
That's a bug for sure, caused by a recent merge that lets the journal proceed without purging the project.
Could you help with purging the package while fixing the cause? |
Purges are being issued shortly. |
Should be cleared now. Thanks for the vigilance on this issue @TechCiel, let us know if any more crop up. |
It now shows
That's because I maintain a PyPI mirror, and |
14353030 is the last valid journal entry for the |
I think an ever-increasing serial is kind of an API promise of pypi.org, and
The |
Reported by a bandersnatch user in #12214, the `last_serial` value for a Project was out of sync with the `journals` table after what appears to have been an automated mass removal of releases.

Note that the `last_serial` column on the project record is not actually the latest serial. Seems the trigger that calls `maintain_project_last_serial()` for the `journals` table failed to update the project. I'll file a separate issue for that.

It has been corrected by calling a no-op update on the row for now.
Originally posted by @ewdurbin in #12214 (comment)
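As a companion to the manual fix described above (a no-op update that re-fires the trigger), here is a hedged sketch of an equivalent manual resync that recomputes `last_serial` directly from the journals table. The table and column names (`projects.last_serial`, `journals.id`, `name`) are assumptions for illustration, not Warehouse's exact schema, and this is not the exact statement that was run.

```python
# Hedged sketch of manually resyncing a project's last_serial from the
# journals table when the trigger has missed an update. Schema names are
# illustrative assumptions, not Warehouse's exact definitions.
import sqlalchemy as sa

engine = sa.create_engine("postgresql+psycopg2://localhost/example")


def resync_last_serial(project_name: str) -> None:
    """Set projects.last_serial to the highest journal id for the project."""
    with engine.begin() as conn:
        conn.execute(
            sa.text(
                "UPDATE projects SET last_serial = ("
                "  SELECT MAX(id) FROM journals WHERE name = :name"
                ") WHERE name = :name"
            ),
            {"name": project_name},
        )
```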