Part-completed v2 chunked upload #32567

Closed
phil-davis opened this issue Sep 4, 2018 · 8 comments

Comments

@phil-davis
Contributor

The following unusual sequence works:

  • set a firewall rule that limits upload size
  • start a collection with MKCOL
  • upload a few chunks within the upload limit
  • upload a chunk that makes the total exceed the firewall file size limit - a 403 is returned
  • assemble your collection into a real file (success 200)

You get a real file that is smaller than the firewall upload size limit, and is a truncated version of the file that you were trying to upload.
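
In client terms, the sequence is roughly the following (a minimal sketch against the v2 chunking endpoints; the server URL, credentials and transfer id are made-up placeholders, and chunk names are assumed to be zero-padded offsets that sort in assembly order):

```python
import requests

BASE = "https://oc.example.com/remote.php/dav"   # placeholder server
AUTH = ("alice", "secret")                       # placeholder credentials
UPLOAD = f"{BASE}/uploads/alice/upload-123"      # transfer id chosen by the client

# 1. start a collection for the chunks
requests.request("MKCOL", UPLOAD, auth=AUTH)

# 2. upload chunks; names only need to sort in assembly order
data = open("big.bin", "rb").read()
CHUNK = 10 * 1024 * 1024
for offset in range(0, len(data), CHUNK):
    r = requests.put(f"{UPLOAD}/{offset:015d}",
                     data=data[offset:offset + CHUNK], auth=AUTH)
    if r.status_code == 403:
        break  # firewall rejected this chunk; a polite client would DELETE the upload

# 3. assemble: MOVE the ".file" member onto the destination
requests.request("MOVE", f"{UPLOAD}/.file",
                 headers={"Destination": f"{BASE}/files/alice/big.bin"},
                 auth=AUTH)
# the MOVE succeeds even though one chunk is missing -> truncated file
```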

This might happen when you are out of quota, or for any other reason that means the file is going to be "too big".

Is this OK, bad or just weird?

@phil-davis
Contributor Author

phil-davis commented Sep 4, 2018

If the client gives up after getting the 403 on one of the chunks, and does not actively go back and remove the chunks uploaded so far, then those chunks are left "hanging around" on the server.

If the client carries on regardless, it ends up with a truncated file.

I guess if the client provides a checksum then the final MOVE is going to fail. So that might clean out the pending chunks?
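
Something like this on the final MOVE, for instance (sketch with the same placeholder names as above; OC-Checksum is the header ownCloud uses for upload verification):

```python
import hashlib
import requests

BASE = "https://oc.example.com/remote.php/dav"   # placeholders as above
AUTH = ("alice", "secret")
UPLOAD = f"{BASE}/uploads/alice/upload-123"

# checksum of the *whole* intended file; a truncated assembly
# should then fail verification on the server
sha1 = hashlib.sha1(open("big.bin", "rb").read()).hexdigest()

requests.request("MOVE", f"{UPLOAD}/.file",
                 headers={"Destination": f"{BASE}/files/alice/big.bin",
                          "OC-Checksum": f"SHA1:{sha1}"},
                 auth=AUTH)
```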

@ownclouders
Contributor

GitMate.io thinks possibly related issues are #16278 (Problems with Chunked upload), #31973 (MOVE on chunked upload), #5326 (web dav chunked upload tasks), #17066 (Upload movie fail : chunking loop), and #19433 (WebGUI upload never completes).

@patrickjahns
Contributor

> If the client gives up after getting the 403 on one of the chunks, and does not actively go back and remove the chunks uploaded so far, then those chunks are left "hanging around" on the server.

The occ dav:cleanup-chunks command takes care of the dangling parts. AFAIK this also runs as a background job.

@phil-davis
Contributor Author

Yes, and in any case there is also the problem of the client simply "going away" after uploading a few chunks. In that case no chunk ever gets a 403 response. The chunks uploaded so far just sit on the server, and the server has no easy way to know whether the client is on a train in a tunnel and about to come out, or is gone for good.

So that cleanup job is needed anyway.

@patrickjahns
Contributor

So to summarize: the only problem is when the client doesn't send a checksum and a chunk is missed.

How should ownCloud react in this scenario? cc @PVince81 @DeepDiver1975

@PVince81
Contributor

Normally when you assemble your collection you send an "OC-Total-Length" header to specify the expected length. If the server finds that it doesn't match, it will fail with 400 Bad Request or similar.
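
As a sketch (same placeholder names as above):

```python
import os
import requests

BASE = "https://oc.example.com/remote.php/dav"   # placeholders as above
AUTH = ("alice", "secret")
UPLOAD = f"{BASE}/uploads/alice/upload-123"

# declare the expected final size on the assembling MOVE; if the
# chunks on the server don't add up to this, it can reject with 400
requests.request("MOVE", f"{UPLOAD}/.file",
                 headers={"Destination": f"{BASE}/files/alice/big.bin",
                          "OC-Total-Length": str(os.path.getsize("big.bin"))},
                 auth=AUTH)
```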

@PVince81
Contributor

Other than that, the server has no way to know what the expected size is, and if no checksum is sent, no verification can be done.

In the future we could also decide to refuse to assemble if neither a checksum nor a total size was sent...
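
Something like this rule at assembly time (a hypothetical sketch only; the actual server code is PHP, this is just to illustrate the policy):

```python
def may_assemble(headers: dict) -> bool:
    """Hypothetical policy: refuse assembly unless the client declared
    either a checksum or the expected total length of the file."""
    return "OC-Checksum" in headers or "OC-Total-Length" in headers
```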

@stale

stale bot commented Sep 20, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 10 days if no further activity occurs. Thank you for your contributions.
