Files are uploaded to Ceph even when a wrong checksum is given #128
The issue is related to the fact that we disabled part files for this storage. The upload currently streams the file straight to its final location in the object store, overwriting any existing object, and the checksum is only verified after all bytes have been written. We can't compute the checksum earlier without going through all the bytes; doing that would require a staging area such as tmpfs. Possible solutions would stage the upload somewhere first; a minimal sketch follows.
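For illustration, a shell-pseudocode sketch of that staging idea; `commit_to_object_store` is a hypothetical helper, and `$expected` stands for the value of the client's `OC-Checksum` header:

```sh
# Hypothetical staging flow: buffer the upload, verify the checksum,
# and only commit the bytes to the object store if it matches.
tmp=$(mktemp -p /dev/shm)                  # tmpfs-backed staging area
cat > "$tmp"                               # receive the request body
actual="SHA1:$(sha1sum "$tmp" | cut -d' ' -f1)"
if [ "$actual" = "$expected" ]; then
    commit_to_object_store "$tmp"          # hypothetical helper
else
    rm -f "$tmp"                           # reject; the existing object is never touched
    exit 1
fi
```

The obvious cost is that every upload is buffered once more before it reaches the object store.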
This also means that if there was an existing file, its contents are lost, unless we can tell the object store to restore the old version. And even in the latter scenario, is this protected by transactions, or is there a risk of a client downloading the wrong file before the old one is restored?
So while a client is doing an upload-overwrite of a (big) file, other clients cannot download the previous content of the file because it has already been "zapped". If the previous file content is somehow restored from a saved version, there will still be a time interval during which clients see the file "disappear" (or be locked). In that interval you want to make sure that the client does not locally delete the file: if the big upload fails, the previous content of the file will "reappear" and the client would then have to download it again.
Another checksum issue: #156
The flow might be slightly different: @sharidas found recently that we first write the oc_filecache entry and then upload the file to the object store from a temporary file. This came up in the context of https://github.com/owncloud/enterprise/issues/3173, where a 200 status was returned despite an error. I wonder whether the checksum problem is related, and whether the fix proposed above could solve it as well. A rough sketch of the contrast follows.
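If I read that correctly, the two orderings contrast roughly like this (shell-style pseudocode; both helper names are hypothetical):

```sh
# Reported current order: metadata first, bytes second.
insert_oc_filecache_entry "$target"          # 1. oc_filecache row is written
upload_to_object_store "$tmpfile"            # 2. upload can still fail -> stale row, yet 200 returned

# Proposed order: bytes (and checksum verification) first, metadata second.
upload_to_object_store "$tmpfile" || exit 1  # 1. must succeed, including the checksum check
insert_oc_filecache_entry "$target"          # 2. only then register the file
```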
@micbar I will have a look at this issue and will update here. |
While I was trying to reproduce the problem, I did the following:
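Presumably this was a PUT with a deliberately wrong checksum, along the lines of the command quoted at the end of this thread (credentials and paths taken from that command):

```sh
curl -u uu0:uu0 -X PUT http://localhost/owncloud-core/remote.php/dav/files/uu0/newfile.txt \
     -d "BBBBB" -H "OC-Checksum: SHA1:random" -v
```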
The file was not uploaded, and the error message points to the reason for the failure. This was conducted with files_primary_s3. I also tried without providing the checksum detail to curl:
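Likely the same request minus the `OC-Checksum` header:

```sh
curl -u uu0:uu0 -X PUT http://localhost/owncloud-core/remote.php/dav/files/uu0/newfile.txt \
     -d "BBBBB" -v
```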
Any pointers that would help reproduce the issue would be appreciated.
Tested with ceph, running the commands below:
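Presumably the same wrong-checksum PUT as in the earlier comment, plus a follow-up GET (a hypothetical addition here) to confirm whether the file was created despite the mismatch:

```sh
curl -u uu0:uu0 -X PUT http://localhost/owncloud-core/remote.php/dav/files/uu0/newfile.txt \
     -d "BBBBB" -H "OC-Checksum: SHA1:random" -v

# check whether the file landed despite the checksum mismatch
curl -u uu0:uu0 http://localhost/owncloud-core/remote.php/dav/files/uu0/newfile.txt
```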
Again, not able to reproduce. :(
Seems to have been fixed by owncloud/core@cbab58b; we need to write acceptance tests for that.
The acceptance tests were already there but were skipped because of this issue. They have been unskipped in owncloud/core#35601.
```sh
curl -u uu0:uu0 -X PUT http://localhost/owncloud-core/remote.php/dav/files/uu0/newfile.txt \
     -d "BBBBB" -H "OC-Checksum: SHA1:random" -v
```
Even though the response code indicates success, the file is still uploaded.
The same problem occurs when chunks are finally moved to their final destination; see the chunked-upload sketch below.
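For reference, a chunked upload that should hit the same code path. This is my reconstruction of the chunking NG endpoints, so treat the exact URLs and the placement of the `OC-Checksum` header as assumptions:

```sh
BASE=http://localhost/owncloud-core/remote.php/dav
ID=upload-128-demo                               # arbitrary transfer id

curl -u uu0:uu0 -X MKCOL "$BASE/uploads/uu0/$ID"
curl -u uu0:uu0 -X PUT "$BASE/uploads/uu0/$ID/000001" -d "BBB"
curl -u uu0:uu0 -X PUT "$BASE/uploads/uu0/$ID/000002" -d "BB"

# Assemble the chunks at the final destination; per this report,
# a wrong OC-Checksum does not prevent the MOVE either.
curl -u uu0:uu0 -X MOVE "$BASE/uploads/uu0/$ID/.file" \
     -H "Destination: $BASE/files/uu0/newfile.txt" \
     -H "OC-Checksum: SHA1:random" -v
```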