Failure uploading large files (handling slowDown) #479
Comments
I did not find anything about minio-js handling of slowDown responses, so I don't think it is supported. Either this needs to be handled here, or perhaps the AWS S3 client has support for handling it? In any case, the request would need to be retried after some timeout (probably with an increasing delay factor in case the server is not yet ready to proceed).
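A minimal sketch of that retry-with-increasing-delay idea, assuming the upload call rejects with an error that exposes the S3 error code (e.g. `"SlowDown"`) or a 503 status; `uploadWacz` is a hypothetical stand-in for whatever function performs the upload:

```ts
// Retry an upload with exponential backoff when the server asks us to slow down.
// The error-shape check (err.code / err.statusCode) is an assumption about how
// the S3 client surfaces the failure, not a confirmed minio-js API.
async function uploadWithRetry(
  uploadWacz: () => Promise<void>,
  maxAttempts = 5,
  baseDelayMs = 1000,
): Promise<void> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await uploadWacz();
      return;
    } catch (err: any) {
      const retryable = err?.code === "SlowDown" || err?.statusCode === 503;
      if (!retryable || attempt === maxAttempts) {
        throw err;
      }
      // increasing delay factor: 1s, 2s, 4s, ...
      const delay = baseDelayMs * 2 ** (attempt - 1);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```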
@wvengen From minio/minio#11147, it seems like one way of approaching this would be to configure the relevant Minio server setting. In Browsertrix Cloud, we should be able to set it as an env var to a higher value if needed; otherwise, it would need to be set however Minio is being deployed.
Thanks for your response!
Hm, good point. I don't think we've tested with OpenStack SWIFT, so we haven't seen this issue, but you're right that some general handling of slow down responses on a 503 (and perhaps 429 Too Many Requests) might not be a bad idea. We can also see if we're able to enable debug logging via the minio-js client. I'm marking this issue for investigation in the coming sprint and will report back.
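For the debug-logging part, the minio-js client exposes a trace hook; a rough sketch, where the endpoint and credential env var names are placeholders rather than the crawler's actual configuration:

```ts
// Enable request tracing on the minio-js client: traceOn streams the raw HTTP
// exchanges to the given stream, which should make any 503 SlowDown responses visible.
import * as Minio from "minio";

const client = new Minio.Client({
  endPoint: "swift-s3.example.org", // placeholder S3-compatible endpoint
  useSSL: true,
  accessKey: process.env.S3_ACCESS_KEY!, // placeholder env var names
  secretKey: process.env.S3_SECRET_KEY!,
});

// Log every request/response to stdout while debugging
client.traceOn(process.stdout);
```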
Another thing to keep in mind: in the past, when working with other applications, SWIFT has proved to be a problem for files > 5 GB, as SWIFT expects large files to be segmented in a particular way. Not sure if that might be an issue with the crawler/minio-js client/SWIFT S3 endpoint as well. For context: https://docs.openstack.org/swift/latest/overview_large_objects.html
Thank you, I didn't know about SWIFT's large object support. (The files I had issues with were < 5 GB, but I might run into this issue later.) But it looks like SWIFT's S3 layer does convert multipart uploads to large object segments, so large objects should be supported when using S3. And I also see references to multipart delete in the source code, so I suppose that would be supported as well.
Experimenting with using the AWS S3 SDK instead of the Minio client in this forked branch.
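For reference, a rough sketch of what an AWS SDK v3 based upload could look like, using `@aws-sdk/lib-storage` for multipart uploads; the bucket, key, endpoint, and file path below are placeholders, not the forked branch's actual code:

```ts
// Multipart upload of a large WACZ via the AWS SDK v3, which splits the body
// into parts and uploads them with bounded concurrency.
import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";
import { createReadStream } from "fs";

const client = new S3Client({
  endpoint: "https://swift-s3.example.org", // placeholder S3-compatible endpoint
  region: "us-east-1",
  forcePathStyle: true,
});

const upload = new Upload({
  client,
  params: {
    Bucket: "crawls",                        // placeholder bucket
    Key: "collections/example.wacz",         // placeholder key
    Body: createReadStream("/crawls/example.wacz"),
  },
  partSize: 100 * 1024 * 1024, // 100 MiB parts
  queueSize: 4,                // concurrent part uploads
});

await upload.done();
```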
Thanks for looking into this! Yes, happy to switch to the AWS S3 client instead of Minio if that works better, but I think we're generally limited to using an existing S3 client for this. I suppose you could always limit to smaller file sizes, but that may be less than ideal.
Thanks, @ikreymer. I'm investigating this more with our storage provider. In any case, I already see that the AWS S3 SDK handles slowDown responses by retrying automatically. Would you like me to prepare a pull request? (There are some things to clean up.)
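The SDK's built-in retry behavior is also configurable; a small sketch where the option names are real SDK v3 client options and the values are only illustrative:

```ts
// Raise the retry budget and use adaptive retries, which add client-side rate
// limiting when the server returns throttling errors such as 503 SlowDown.
import { S3Client } from "@aws-sdk/client-s3";

const client = new S3Client({
  region: "us-east-1",
  maxAttempts: 10,       // retry each request up to 10 times (default is 3)
  retryMode: "adaptive", // throttle outgoing requests after throttling errors
});
```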
Thanks, would definitely appreciate it! There was also a request for region support in #515, and it looks like you were addressing that as well.
This should address the issue of connecting to buckets stored outside us-east-1 (#515) while the switch from the Minio client to the AWS SDK is being worked on (#479). Co-authored-by: Mattia <[email protected]>
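A sketch of how region support might be wired on top of the AWS SDK client, with the region passed in instead of being hard-coded to us-east-1; the env var names are illustrative, not necessarily what the change uses:

```ts
// Build the S3 client from configurable region/endpoint values so buckets
// outside us-east-1 can be reached.
import { S3Client } from "@aws-sdk/client-s3";

const client = new S3Client({
  region: process.env.STORE_REGION || "us-east-1", // placeholder env var name
  endpoint: process.env.STORE_ENDPOINT_URL,        // placeholder env var name
  forcePathStyle: true,
});
```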
During a large crawl (2GB+), I get stuck in the "Uploading WACZ" stage (using OpenStack SWIFT S3 for storage). The log shows
and the error message mentioned is
Trying to reproduce this, it appears that uploading large files triggers a slowDown response from the S3 server, which the MinIO client does not seem to handle automatically.
eventually gives the error
Amazon mentions that 503 Slow Down responses can occur; see also the best practices, which recommend reducing the request rate.
Do we need support for handling slowDown responses from the S3 endpoint?