Reposilite is unable to recover when the S3 connection is lost or interrupted during an upload operation. In most cases, this is caused by the inefficient storage provider implementation, which issues a request to the S3 service for every Reposilite API request.
As a result, affected users have to manually remove files from a build that was only partially uploaded. It would be a good idea to introduce mechanisms that prevent this kind of incident. Possible enhancements in this area:
S3 request queue
Firstly, we could handle S3 requests via a queue. The queue could also support request priorities, so uploads (higher priority) can be distinguished from downloads (lower priority). In that scenario, the queue should be handled by a standalone executor service operating on a fixed number of threads, so it would naturally work well with any kind of rate limiter. A sketch of this idea follows.
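A minimal sketch of such a queue, assuming a plain `ThreadPoolExecutor` backed by a `PriorityBlockingQueue`. The names `S3Priority`, `S3Task`, and `S3RequestQueue` are hypothetical and not part of Reposilite; tasks must be submitted via `execute` so the comparable task itself (not a `FutureTask` wrapper) lands in the priority queue:

```kotlin
import java.util.concurrent.PriorityBlockingQueue
import java.util.concurrent.ThreadPoolExecutor
import java.util.concurrent.TimeUnit
import java.util.concurrent.atomic.AtomicLong

// Hypothetical types, for illustration only (not Reposilite API).
enum class S3Priority { UPLOAD, DOWNLOAD } // UPLOAD has a lower ordinal, so it is drained first

class S3Task(
    private val priority: S3Priority,
    private val sequence: Long,
    private val action: () -> Unit
) : Runnable, Comparable<S3Task> {
    override fun run() = action()
    // Order by priority first, then by submission order (FIFO within the same priority)
    override fun compareTo(other: S3Task): Int =
        compareValuesBy(this, other, { it.priority }, { it.sequence })
}

class S3RequestQueue(threads: Int = 4) {
    private val counter = AtomicLong()
    // Fixed number of worker threads; pending tasks wait in the priority queue
    private val executor = ThreadPoolExecutor(
        threads, threads, 0L, TimeUnit.MILLISECONDS,
        PriorityBlockingQueue<Runnable>(64)
    )

    fun submit(priority: S3Priority, action: () -> Unit) =
        executor.execute(S3Task(priority, counter.incrementAndGet(), action))

    fun shutdown() = executor.shutdown()
}
```

Because the worker count is fixed, a rate limiter can simply be applied inside each task (or around `submit`) without unbounded thread growth.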
Exponential back-off
Instead of re-streaming the artifact to S3, Reposilite could store files locally first and then try to re-upload them to S3. The retry operation should use an exponentially increasing delay for each failed attempt, as sketched below.
Extra note: if possible, the retry operation may use the Retry-After value (if available) to determine the delay (in seconds).
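A minimal sketch of the retry schedule, assuming the file is already persisted locally and the upload is retried as a whole. The helper `retryWithBackoff` and the `retryAfterSeconds` extractor are hypothetical names; how the Retry-After value is read from the S3 client's exception is left open:

```kotlin
import java.time.Duration

// Hypothetical helper (not Reposilite API): retries an operation with exponentially
// increasing delays (1s, 2s, 4s, ...), preferring a server-provided Retry-After value.
fun <T> retryWithBackoff(
    maxAttempts: Int = 5,
    baseDelay: Duration = Duration.ofSeconds(1),
    retryAfterSeconds: (Exception) -> Long? = { null }, // extract Retry-After from the failure, if any
    operation: () -> T
): T {
    var lastError: Exception? = null
    repeat(maxAttempts) { attempt ->
        try {
            return operation()
        } catch (exception: Exception) {
            lastError = exception
            // Retry-After (in seconds) wins over the computed exponential delay
            val delay = retryAfterSeconds(exception)
                ?.let { Duration.ofSeconds(it) }
                ?: baseDelay.multipliedBy(1L shl attempt)
            Thread.sleep(delay.toMillis())
        }
    }
    throw lastError ?: IllegalStateException("Retry failed without an exception")
}

// Usage sketch: re-upload a locally stored artifact until it succeeds or attempts run out.
// retryWithBackoff { s3Client.putObject(request, localFile) }
```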
~ Reported on Discord: https://discord.com/channels/204728244434501632/875148821842231296/1179369870052294686