[2.0.1] Not possible to upload files: Connection Closed #3816
Comments
Could you run the client with
How is it possible to run the client with that? Edit: Yes, I really get HTTP/1.1 400
@Ninos Did you also update the server?
@Ninos Sorry, thought you were on Linux on the client. On Windows you could start the client as
@icewind1991 @PVince81 Is this a known server bug?
Yes, the server is up to date. In the last few days I only updated GitLab; no other packages were updated. And I need to correct my issue report: other files cannot be uploaded anymore either. I think it's because of the connection. Here's my log (hope I found the important part):
Adding a 'me too'. I get the same 'connection closed' errors and then the client crashes with: 09-12 10:50:12:531 0x1051970 ASSERT: "!_runningNow" in file /usr/src/packages/BUILD/src/libsync/owncloudpropagator.cpp, line 633. If I run it with OWNCLOUD_MAX_PARALLEL=1 it works again and files are uploaded as usual. Client version 2.0.0 worked fine; it broke after upgrading to 2.0.1.
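For anyone else trying the workaround above: on Linux the environment variable is simply set when launching the client, roughly like this (the binary name may differ per distribution):

OWNCLOUD_MAX_PARALLEL=1 owncloud &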
Small update: v2.0.0 is also not working. v1.8.4 is working fine.
@moesbergen Which client OS? Did you compile the ownCloud client yourself? If you enable HTTP keep-alive in your server (EnableKeepAlive true, KeepAliveTimeout and KeepAliveRequests set to high values), does that change anything?
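For Apache the directives are spelled slightly differently; a minimal example of what enabling keep-alive would look like in the server config (the values here are only illustrative, not recommendations):

KeepAlive On
KeepAliveTimeout 60
MaxKeepAliveRequests 1000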
I run Linux Mint 17.2. I did not compile the client myself; I added the ownCloud Ubuntu repo to apt and installed it from there. The server already has keep-alive enabled:
@rmoesbergen Can you also check the
@rmoesbergen You also see it only for uploads, not downloads?
Yes, it happens only with uploads, and only with large uploads that need to be 'chunked'. Small files upload just fine. The ownCloud log on the server side gives this error when uploads fail:

Exception: {"Message":"HTTP/1.1 400 expected filesize 5242880 got 2195456","Code":0,"Trace":"#0 /var/www/owncloud/lib/private/connector/sabre/file.php(100): OC\Connector\Sabre\File->createFileChunked(Resource id #26)\n#1 /var/www/owncloud/lib/private/connector/sabre/directory.php(113): OC\Connector\Sabre\File->put(Resource id #26)\n#2 /var/www/owncloud/3rdparty/sabre/dav/lib/DAV/Server.php(1053): OC\Connector\Sabre\Directory->createFile('Sent-1-chunking...', Resource id #26)\n#3 /var/www/owncloud/3rdparty/sabre/dav/lib/DAV/CorePlugin.php(513): Sabre\DAV\Server->createFile('Documents/Mail/...', Resource id #26, NULL)\n#4 [internal function]: Sabre\DAV\CorePlugin->httpPut(Object(Sabre\HTTP\Request), Object(Sabre\HTTP\Response))\n#5 /var/www/owncloud/3rdparty/sabre/event/lib/EventEmitterTrait.php(105): call_user_func_array(Array, Array)\n#6 /var/www/owncloud/3rdparty/sabre/dav/lib/DAV/Server.php(469): Sabre\Event\EventEmitter->emit('method:PUT', Array)\n#7 /var/www/owncloud/3rdparty/sabre/dav/lib/DAV/Server.php(254): Sabre\DAV\Server->invokeMethod(Object(Sabre\HTTP\Request), Object(Sabre\HTTP\Response))\n#8 /var/www/owncloud/apps/files/appinfo/remote.php(83): Sabre\DAV\Server->exec()\n#9 /var/www/owncloud/remote.php(132): require_once('/var/www/ownclo...')\n#10 {main}","File":"/var/www/owncloud/lib/private/connector/sabre/file.php","Line":347}

Server version 8.1.1
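For context: 5242880 bytes is exactly 5 MiB, the client's default chunk size, and the 'Sent-1-chunking...' name in the trace comes from the chunked-upload naming scheme. A rough sketch of such an upload done by hand, based on the old chunking protocol (host, credentials, the transfer id and the OC-Chunked header usage are assumptions, not taken from this thread):

# split the file into 5 MiB pieces and PUT each one as name-chunking-<transferid>-<total>-<index>
split -b 5242880 -d Sent-1 Sent-1.part_
i=0
total=$(ls Sent-1.part_* | wc -l)
for c in Sent-1.part_*; do
  curl -u user:password -X PUT -H "OC-Chunked: 1" --data-binary @"$c" \
    "https://owncloud.example.tld/remote.php/webdav/Documents/Mail/Sent-1-chunking-42-$total-$i"
  i=$((i+1))
done

The "expected filesize X got Y" error means the server received fewer bytes for a chunk than the request announced.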
Could you guys check the settings mentioned here: https://doc.owncloud.org/server/8.0/admin_manual/configuration_files/big_file_upload_configuration.html#apache
Does changing
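In case it saves someone a lookup, the settings that page talks about are roughly these (example values only, adjust to your setup):

php.ini (or the php-fpm pool config):
upload_max_filesize = 16G
post_max_size = 16G
max_input_time = 3600
max_execution_time = 3600

Apache vhost or .htaccess:
LimitRequestBody 0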
For me it's not possible to add the option
@Ninos It could still be related to
@RealRancor @icewind1991 @PVince81 We need some help here with regard to
@guruz this usually means that the bytes read from the request body are less than the size specified in the Content-Length header. Something (proxy or web server module?) might be messing with the content before it reaches the ownCloud server code.
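One hedged way to narrow that down (host, credentials and paths below are placeholders): PUT a file of known size directly against the WebDAV endpoint with curl, once through the proxy and once against the backend, and compare what actually lands on disk.

# create a 5 MiB probe file and upload it via WebDAV
dd if=/dev/urandom of=/tmp/probe.bin bs=1M count=5
curl -u user:password -T /tmp/probe.bin https://owncloud.example.tld/remote.php/webdav/probe.bin
# then compare the size of the stored file in the data directory, e.g.
ls -l /var/www/owncloud/data/user/files/probe.bin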
I do have the same issue. In my case it was a 27 MB file (the ownCloud client for OS X).
And this: I have this error in the ownCloud logs when an upload does not succeed:
The problem, though, is that I cannot reproduce this reliably. I just restarted my server and now the files upload.
Here are the logs from Apache regarding this file:
This is the owncloud.log for another file that could not be uploaded:
@guruz I've updated my owncloud.log in one of my older posts. Here's also my apache2 error.log:
In php-fpm I cannot find any relevant logs:
I enabled debugging on the server. Here is one of the famous errors:
And here is everything from the server:
What I think can clearly be seen is that the time between the first filelock
...
I'm confused, so is 8.1.3 broken too or not?
I use ownCloud 8.1.3 (beta) and client 2.0.2-nightly20150929 (build 2752) and still have the problems.
I was talking about server version 8.1.3 the whole time, sorry, didn't I mention that?
Sorry, hard to keep an overview here.
@maxstricker Does it work for you too if you configure it like in that comment? I'm tempted to close this bug here since this seems to be a server setup problem, nothing to do with ownCloud PHP (-> core bugtracker) or the client :-|
I just changed my configuration to owncloud.example.tld:4433, but the problem remains the same.
The only thing I can recommend is that you paste your Apache virtual host configuration; maybe we see something obvious in there.
Below is my configuration; I adopted the various recommendations from similar issues found in the OC forums (KeepAliveTimeout etc.).
This is how mine looks:
I did some extensive testing last night. I monitored the chunk (.part) files in the cache folder and the clamav temp files with this script:

#!/bin/bash
# Log the ownCloud chunk (.part) files and the clamav temp files once per second.
COUNTER=0
while [ "$COUNTER" -lt 600000 ]; do
    ls -lhA /srv/owncloud-data/toni/cache/ | grep "part" >> /tmp/ocfolder.log
    ls -lhA /tmp/ | grep "clamav" >> /tmp/ocfolder.log
    sleep 1
    COUNTER=$((COUNTER+1))
done
Here are the matching part files and clamav temp files; you can tell they match from the sizes. Chunk 19:
Chunk 20:
Chunk 21:
What you can see is that the
What you can also see from the timestamps is when the client receives a 400 and starts with new files (the Turbo.mkv).
Just one more question. I did an
Why is the timestamp for all files October 3 now? Today is the 2nd. |
Because that's how the chunk expiration code does it. Then during garbage collection, it expires all chunks whose mtime is in the past.
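Purely as an illustration of that idea, not the actual server code (hypothetical path, GNU touch and find):

# when a chunk is written, its mtime is set to the expiry time instead of "now"
touch -d "tomorrow" /srv/owncloud-data/toni/cache/example.part
# a later cleanup pass can then simply delete every chunk whose expiry time has passed
find /srv/owncloud-data/toni/cache/ -name '*.part' ! -newermt now -delete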
@PVince81 thanks a lot
Just an update. As I said before, I have a second server. I was working with one of the users of this server, and this very user also got the same error. Should I open an issue in core? Is there any way I can debug this further? The closest I got to figuring out what is going on was in my second-to-last post: #3816 (comment)
@guruz: I had a chat with @stonerl on IRC and he told me that he had a resumable situation. Basically, chunks fail from time to time, but the second time they work. However, if several chunks failed within the same transfer, the file still gets blacklisted. @guruz does that match the current implementation of the client? If yes, I'd say the client should discard the blacklist counter value as soon as there is a successful chunk, and only blacklist the file if the same chunk fails over and over again. What do you think?
Hi, I think I'm seeing this problem, or at least something that looks a lot like it:
This is a test environment so I can happily change clients, add/remove files etc. The files in question are created from scripts triggered by cron, and those scripts have been disabled for the time being. If it would be useful I'll give more detail on the setup, log excerpts etc., or I can open another issue if that would be easier than adding to an already long thread. Thanks |
Hello, |
@andi-at That doesn't sound good. Do you have some info about any special ownCloud apps or webserver modules you have installed?
Hello, I found out another creepy thing: the broken file on the server is actually BIGGER than the original?!
I have been having this issue for weeks here. It's been a long time since I last saw a successful sync. The separated behaviour (discover all, then sync all) does not help either... The client should be smart enough to scan and sync depth-first, so that it could resume faster after a failure...
EDIT: The connection close remains and the bad character is logged. I must have read a log where the connection was closed before the bad character was found...
I tested with the latest version, and I correctly get a "Bad Request" from the server. Ignoring the character would be the opposite of #3736
Why is this ticket closed? Is there any solution or fix?
@adressler The important question is whether the sync client properly resumes the upload or whether it always fails. The message "400 expected filesize" can sometimes happen due to timeouts and is not a bug. If the upload resumes properly during the next run, there is nothing to worry about.
Hey there,
since v2.0.1 I cannot upload large files anymore (updated yesterday). Before that I had v1.8.4, I'm not sure, sorry.
Here's my error log:
ownCloud: v8.1.1
PHP: v5.6
OS: Debian Jessie
DB: MariaDB
Client OS: Windows 8.1 x64