Cancel S3 requests when dropped #794
Conversation
Today we don't cancel S3 requests when dropped. For our prefetcher that means we keep streaming (up to) 2GB of data that will never be used. This change cancels in-flight requests when dropped, so that the CRT will stop streaming them. Some bytes might still be in flight or delivered, which is fine. Canceling requests is a no-op if they've already completed.

The tricky case for this change is PutObject. Our current implementation of `PutObjectRequest::write` blocks until the bytes it provides are consumed by the client. But sometimes the client might stop reading from the stream because the request has failed. That case happens to work today because we don't retain a reference to the meta request ourselves, and so the failed request's destructors run immediately after the failure, which unblocks the writer and returns it an error. But now we do hold onto a reference, and the destructors can't run until the last reference is released, so the writer is never unblocked. To fix this, we make the `write` and `complete` methods of the `PutObjectRequest` poll _both_ the write stream and the request itself in parallel. If the request completes, this gives us a chance to bail out of the write rather than blocking forever.

Signed-off-by: James Bornholt <[email protected]>
```
@@ -167,9 +169,51 @@ async fn test_put_object_abort() {
    drop(request); // Drop without calling complete().

    // Allow for the AbortMultipartUpload to complete.
    // Try to wait a while for the async abort to complete. For the larger object, this might be
```
This seems to still not be enough: https://github.com/awslabs/mountpoint-s3/actions/runs/8134885229/job/22228500040?pr=794#step:8:1228
I'm still messing with it to confirm, but it looks like a CRT-side issue — the AbortMultipartUpload can race with in-flight UploadParts, which might succeed after the Abort succeeds and so re-create the upload. I'm not sure if that's worth fixing here (since we're not actually changing how PUT works), so I might just disable the test for large objects.
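As an aside, the "wait a while" in the test diff above boils down to a bounded polling loop, roughly like the hypothetical sketch below. The `in_progress_upload_exists` check is an assumption (e.g. something backed by ListMultipartUploads against the test bucket); the real test's helpers and timeouts may differ.

```rust
use std::time::Duration;

/// Hypothetical check for whether the aborted multipart upload is still listed,
/// e.g. backed by ListMultipartUploads against the test bucket.
async fn in_progress_upload_exists(_key: &str) -> bool {
    // elided in this sketch
    false
}

/// Poll until the aborted upload disappears, giving up after a bounded number of
/// attempts. The abort is asynchronous (and best-effort), so the test can only
/// wait "a while" rather than asserting immediately.
async fn wait_for_abort(key: &str) -> bool {
    for _ in 0..30 {
        if !in_progress_upload_exists(key).await {
            return true;
        }
        tokio::time::sleep(Duration::from_secs(1)).await;
    }
    false
}
```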
The CRT abort is best-effort -- part uploads can succeed after the Abort succeeds, which effectively recreates the MPU. This is mentioned in the AbortMultipartUpload documentation.

Signed-off-by: James Bornholt <[email protected]>
LGTM!
LGTM! I was planning to put a PR up for this but the trick with `PutObjectRequest` is what I was still figuring out.
The issue was addressed in Cancel S3 requests when dropped [awslabs#794](awslabs#794). Signed-off-by: Alessandro Passaro <[email protected]>
Description of change
Today we don't cancel S3 requests when dropped. For our prefetcher that
means we keep streaming (up to) 2GB of data that will never be used.
This change cancels in-flight requests when dropped, so that the CRT
will stop streaming them. Some bytes might still be in flight or
delivered, which is fine. Canceling requests is a no-op if they've
already completed.
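To illustrate the cancel-on-drop idea, here is a minimal sketch. The type and method names below are stand-ins for illustration, not the crate's actual bindings; the real cancellation goes through the CRT's cancel operation on the native meta request.

```rust
// Illustrative sketch only: `CrtMetaRequest` stands in for the real CRT
// binding type, which is not shown here.
struct CrtMetaRequest;

impl CrtMetaRequest {
    /// Ask the CRT to stop work on this meta request. Canceling a request
    /// that has already completed is a no-op.
    fn cancel(&self) {
        // In the real bindings this would call into the CRT's cancel
        // operation on the underlying native meta request.
    }
}

/// The handle we hand back to callers (e.g. the prefetcher's GetObject stream).
/// Dropping it cancels the in-flight request so the CRT stops streaming data
/// that will never be consumed.
struct S3Request {
    meta_request: CrtMetaRequest,
}

impl Drop for S3Request {
    fn drop(&mut self) {
        // Best-effort: some bytes may still be in flight or already delivered,
        // which is fine; we just stop asking for more.
        self.meta_request.cancel();
    }
}
```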
The tricky case for this change is PutObject. Our current implementation of
`PutObjectRequest::write` blocks until the bytes it provides are consumed by
the client. But sometimes the client might stop reading from the stream because
the request has failed. That case happens to work today because we don't retain
a reference to the meta request ourselves, and so the failed request's
destructors run immediately after the failure, which unblocks the writer and
returns it an error. But now we do hold onto a reference, and the destructors
can't run until the last reference is released, so the writer is never
unblocked. To fix this, we make the `write` and `complete` methods of the
`PutObjectRequest` poll _both_ the write stream and the request itself in
parallel. If the request completes, this gives us a chance to bail out of the
write rather than blocking forever.
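A rough sketch of that shape, racing the write against a boxed completion future with `futures::future::select`. The types and method names here are assumptions for illustration, not the crate's actual API.

```rust
use futures::future::{self, BoxFuture, Either};

/// Hypothetical stand-ins for the real types: the handle that feeds bytes to
/// the CRT's upload stream, and the error type of a failed meta request.
struct UploadStream;
#[derive(Debug)]
struct PutError;

impl UploadStream {
    /// Resolves once the CRT has consumed `data`. If the request has already
    /// failed, the CRT stops reading and this future can pend forever on its own.
    async fn write_all(&mut self, _data: &[u8]) {
        // elided: hand bytes to the CRT and resolve once it has consumed them
    }
}

struct PutObjectRequest {
    stream: UploadStream,
    /// Resolves when the underlying meta request completes (success or failure).
    completion: BoxFuture<'static, Result<(), PutError>>,
}

impl PutObjectRequest {
    /// Poll the write and the request completion in parallel. If the request
    /// completes first (e.g. it failed and stopped reading the stream), bail
    /// out of the write instead of blocking forever.
    async fn write(&mut self, data: &[u8]) -> Result<(), PutError> {
        let write = std::pin::pin!(self.stream.write_all(data));
        match future::select(write, &mut self.completion).await {
            // The write finished normally.
            Either::Left(((), _)) => Ok(()),
            // The request completed before the write did: surface that as an
            // error from the writer's point of view (the real client would
            // report the request's actual failure here).
            Either::Right((result, _)) => match result {
                Ok(()) => Err(PutError),
                Err(e) => Err(e),
            },
        }
    }
}
```

The same select-based approach would apply to `complete`, so that waiting for the final result can also observe an early request failure.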
Relevant issues: fixes #510.
Does this change impact existing behavior?
No.
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license and I agree to the terms of the Developer Certificate of Origin (DCO).