nextcloud: Empty Files on chunked uploads #252980
Comments
The issue is reproducible, and turning on request buffering does indeed fix the problem. I'll do some more thorough research, then we'll see whether it should be optional or not.
OK, I just realized that turning on fastcgi buffering doesn't necessarily fix the issue. First of all, let me post the diff of what I have locally:

```diff
diff --git a/nixos/modules/services/web-apps/nextcloud.nix b/nixos/modules/services/web-apps/nextcloud.nix
index e0a7e7d4859c..42a75d1709bb 100644
--- a/nixos/modules/services/web-apps/nextcloud.nix
+++ b/nixos/modules/services/web-apps/nextcloud.nix
@@ -714,6 +714,24 @@ in {
directive and header.
'';
};
+ enableFastcgiRequestBuffering = mkOption {
+ type = types.bool;
+ default = false;
+ apply = x: if x then "on" else "off";
+ description = mdDoc ''
+ Whether to buffer requests against fastcgi requests. This is a workaround
+ for `PUT` requests with the `Transfer-Encoding: chunked` header set and
+ an unspecified `Content-Length`. Without request buffering for these requests,
+ Nextcloud will create files with zero bytes length as described in
+ [nextcloud/server#7995](https://github.com/nextcloud/server/issues/7995).
+
+ ::: {.note}
+ Please keep in mind that upstream suggests to not enable this as it might
+ lead to timeouts on large files being uploaded as described in the
+ [administrator manual](https://docs.nextcloud.com/server/latest/admin_manual/configuration_files/big_file_upload_configuration.html#nginx).
+ :::
+ '';
+ };
};
};
@@ -1184,7 +1202,7 @@ in {
fastcgi_param front_controller_active true;
fastcgi_pass unix:${fpm.socket};
fastcgi_intercept_errors on;
- fastcgi_request_buffering off;
+ fastcgi_request_buffering ${cfg.nginx.enableFastcgiRequestBuffering};
fastcgi_read_timeout ${builtins.toString cfg.fastcgiTimeout}s;
'';
};
diff --git a/nixos/tests/nextcloud/basic.nix b/nixos/tests/nextcloud/basic.nix
index b7af6d6d7364..bf9e3fc11551 100644
--- a/nixos/tests/nextcloud/basic.nix
+++ b/nixos/tests/nextcloud/basic.nix
@@ -49,6 +49,7 @@ in {
adminpassFile = "${pkgs.writeText "adminpass" adminpass}"; # Don't try this at home!
dbtableprefix = "nixos_";
};
+ nginx.enableFastcgiRequestBuffering = true;
package = pkgs.${"nextcloud" + (toString nextcloudVersion)};
autoUpdateApps = {
enable = true;
@@ -116,5 +117,15 @@ in {
)
assert "hi" in client.succeed("cat /mnt/dav/test-shared-file")
nextcloud.succeed("grep -vE '^HBEGIN:oc_encryption_module' /var/lib/nextcloud-data/data/root/files/test-shared-file")
+
+ with subtest("Create non-empty files with Transfer-Encoding: chunked"):
+ client.succeed(
+ 'dd if=/dev/urandom of=testfile.bin bs=1M count=10',
+ 'curl --fail -v -X PUT --header "Transfer-Encoding: chunked" -d @testfile.bin "http://nextcloud/remote.php/webdav/testfile.bin" -u ${adminuser}:${adminpass}',
+ )
+
+ nextcloud.succeed('test "$(stat --printf=%s /var/lib/nextcloud-data/data/root/files/testfile.bin)" -ne 0')
+ nextcloud.succeed("ls -lah >&2 /var/lib/nextcloud-data/data/root/files/testfile.bin")
+ client.succeed("ls -lah testfile.bin >&2")
'';
})) args
```

When you execute that, you'll realize that the local file has a size of 10MB (10x 1MB from /dev/urandom), but the remote file only has 5MB (and the hashes of the local and remote file differ accordingly).

So this is not a proper solution, rather a mitigation for a subset of the files. From what I've read in https://docs.cyberduck.io/mountainduck/issues/fastcgi/ & https://sabre.io/dav/clients/finder/, this seems to be a bug (or actually an unimplemented feature) in php-fpm.

I don't want to advertise enabling fastcgi_request_buffering as a solution here because apparently it doesn't work properly (please double-check, perhaps I'm missing something). IMHO the proper solution is to make Nextcloud return a 411 in this case (which should be fixed upstream). Do the other maintainers agree with that?
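As a stop-gap until such an upstream fix exists, an administrator could reject these requests at the nginx level. The snippet below is an untested sketch, not part of the NixOS Nextcloud module: it appends a check on the `$http_transfer_encoding` request header to the vhost that the module generates, returning 411 (Length Required) as suggested in the comment above. Whether this interacts cleanly with the module-generated locations has not been verified.

```nix
# Sketch only: reject chunked uploads without a Content-Length with HTTP 411
# at the nginx level, instead of letting php-fpm store zero-byte files.
# This check is NOT part of the NixOS Nextcloud module and may block clients
# that legitimately rely on chunked uploads.
{ config, ... }:
{
  services.nginx.virtualHosts.${config.services.nextcloud.hostName}.extraConfig = ''
    if ($http_transfer_encoding ~ "chunked") {
        return 411;
    }
  '';
}
```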
Hi, can we add this as an option?
Which option specifically? Correction (which was made after the next comment): the […]
I am going to try […]
Ehh, sorry, I was confused. I meant precisely this; see my VM test above.
I did some testing and both […]
Feel free to re-use the patch I have above. However, this is apparently not a universal solution; sometimes it still causes corrupted data, which is why I don't want to add it to nixpkgs.
The Nextcloud docs say this about fastcgi_request_buffering: https://docs.nextcloud.com/server/latest/admin_manual/configuration_files/big_file_upload_configuration.html#nginx, and I could only find nextcloud/server#7993 about chunked_transfer_encoding. There seem to be issues with fastcgi_request_buffering too (nextcloud/server#7995), but they set it for Apache somewhat recently (nextcloud/server#45035).
Describe the bug
I own an Android tablet with Nextcloud Sync supposedly supported by the OEM. However, files that are uploaded end up empty (0 bytes in size). It seems to be an issue with Nextcloud, as described in the upstream issue linked below.
One user suggests enabling `fastcgi_request_buffering`. I am not sure if this causes any unknown regressions, but I have not been able to configure Nextcloud to test it. My suggestion is to either expose the option via the module for users to change, or to evaluate changing it in nixpkgs.

Upstream issue: #7995
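For illustration, if the unmerged patch from the maintainer's comment above were applied, opting back into request buffering would be a one-line change in the system configuration. The option name below is taken from that patch and does not exist in nixpkgs otherwise:

```nix
# Hypothetical: `enableFastcgiRequestBuffering` only exists in the unmerged
# patch quoted earlier in this thread, not in nixpkgs itself.
{
  services.nextcloud.nginx.enableFastcgiRequestBuffering = true;
}
```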
Steps To Reproduce
Although I have not tested it with curl myself, other Nextcloud users seem to have been able to reproduce the issue with a command along the lines of the one sketched below:
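The invocation below is adapted from the curl call in the VM test quoted earlier in the thread; hostname, user name, and password are placeholders, and `--data-binary` is used instead of the test's `-d` so that the file bytes are sent unmodified.

```sh
# Create a 10 MiB test file and upload it with a forced
# "Transfer-Encoding: chunked" header and no Content-Length.
dd if=/dev/urandom of=testfile.bin bs=1M count=10

curl --fail -v -X PUT \
  --header "Transfer-Encoding: chunked" \
  --data-binary @testfile.bin \
  -u admin:password \
  "https://nextcloud.example.com/remote.php/webdav/testfile.bin"

# On the server, the stored file should be 10 MiB; with the bug present,
# it ends up empty (0 bytes) or truncated.
```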
Expected behavior
Nextcloud should work with chunked uploads.
Notify maintainers
@schneefux @bachp @globin @Ma27