Error: "Your socket connection to the server was not read from or written to within the timeout period." #29
Comments
What's the size of the file you're uploading? Does this error happen consistently for particular files on multiple systems?
The size is about 11 KB, and it's the same file on both systems. On one system (PHP 5.4.9, Ubuntu 12.04) it consistently fails with the above error; on the other (PHP 5.4.6, Ubuntu 12.10) it consistently works.
Ah, interesting. The UploadBuilder is used for multipart uploads, and the minimum part size for a multipart upload to Amazon S3 is 5 MB (see http://docs.amazonwebservices.com/AmazonS3/latest/dev/qfacts.html). If this is in fact the cause, we will look at adding a check to ensure the size of the file meets the minimum size required by the uploader.
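As a rough illustration of such a check (this is a sketch, not the SDK's actual internals; the bucket, key, region, and file path are placeholders), a caller could fall back to a single PutObject when the source is below the multipart minimum:

```php
<?php
// Sketch: guard against sources below S3's 5 MB multipart minimum,
// assuming the aws-sdk-php v2 UploadBuilder API. Bucket, key, region,
// and path are placeholders.
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Model\MultipartUpload\UploadBuilder;

$client = S3Client::factory(array('region' => 'us-east-1'));
$source = '/path/to/file';

if (filesize($source) < 5 * 1024 * 1024) {
    // Too small for multipart: use a plain PutObject call instead
    $client->putObject(array(
        'Bucket'     => 'my-bucket',
        'Key'        => 'my-key',
        'SourceFile' => $source,
    ));
} else {
    $uploader = UploadBuilder::newInstance()
        ->setClient($client)
        ->setSource($source)
        ->setBucket('my-bucket')
        ->setKey('my-key')
        ->build();
    $uploader->upload();
}
```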
It still puzzles me a bit that the uploader works when constructing an in-memory source explicitly; do you have an idea why that is? Another thought: couldn't the uploader just choose the mechanism that is suitable for the given source?
What does the variable $url contain?
$url is a URL of the form file://some-path, which can be used with fopen(). The contents of this URL do not change anymore, although they might have changed shortly before the upload.
The reason this is happening is that cURL is being told it is going to send more data than it actually has. I'm not sure where the erroneous length is being specified, but clearing the fstat cache should help to narrow down the possibilities. Does this still fail for you after the change I made to clear the fstat cache?
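For anyone unfamiliar with the stale-stat problem being referenced: PHP caches stat() results per file, so a size read after the file has changed can be wrong, which would make cURL announce a Content-Length that doesn't match the bytes actually available. A minimal demonstration (the file name is made up):

```php
<?php
// Demonstrates PHP's stat cache: filesize() may return a stale value
// after a write until clearstatcache() drops the cached entry.
$path = '/tmp/upload-src';

file_put_contents($path, 'first');
$stale = filesize($path);          // 5 bytes, now cached

file_put_contents($path, 'first+more bytes');
$maybeStale = filesize($path);     // may still report the cached 5 bytes

clearstatcache(true, $path);       // drop the cached stat entry
$fresh = filesize($path);          // re-stats the file: 16 bytes

printf("stale=%d maybeStale=%d fresh=%d\n", $stale, $maybeStale, $fresh);
```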
I'm having the same issue when uploading an image to S3. I'm using the UploadBuilder, and the issue occurs randomly. (Running PHP 5.4.10 on CentOS 6.)
@RamyTalal What version of PHP are you using, what version of cURL, and what arch is your system (i386, x86_64)? You can check with:

uname -a
php --version
curl --version
It seems that this error occurs intermittently when uploading to Amazon S3. I've updated the Amazon S3 client to automatically retry these specific failures using exponential backoff. If you continue to see this issue, please ensure that you are not sending an incorrect Content-Length header in your requests.
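For context, an exponential-backoff retry looks roughly like the following. This is a generic sketch, not the SDK's actual retry plugin; doUpload() is a hypothetical function that throws on failure:

```php
<?php
// Generic exponential-backoff sketch: retry a failed upload, doubling
// the wait after each attempt. doUpload() is a placeholder.
$maxRetries = 5;

for ($attempt = 0; $attempt <= $maxRetries; $attempt++) {
    try {
        doUpload();
        break; // success
    } catch (Exception $e) {
        if ($attempt === $maxRetries) {
            throw $e; // out of retries, give up
        }
        sleep(1 << $attempt); // 1s, 2s, 4s, 8s, ... between attempts
    }
}
```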
I've been noticing this a lot when using concurrency > 1, on several systems. Most of the time an uploaded folder will begin to get 400s due to timeouts after the first ten or so requests. I did some superficial investigation into this (PHP userland layers at the lowest), and as near as I can tell this may be a cURL-related bug; I certainly can't see the Guzzle library doing anything wrong. Despite giving cURL the correct content lengths, occasionally the request-body reading function given to cURL by Guzzle will get called a second time after correctly signalling EOF with a blank string response. This is always followed by a 20-second stall and eventually a timeout, e.g. during PUTs in Guzzle's RequestMediator::readRequestBody (remember that curl_multi is being used here, so the jumbled output is just the "parallel" curl handle execution).
Note that the last line has been hit twice despite the first call returning a 0-length string. My gut feeling is that cURL is getting into a state where it's expecting more data, and so it never concludes the request, though I can't say for sure whether libcurl itself is at the root of this.
I'm not proposing a re-open of this because I can't be 100% sure; I'm just leaving my findings behind. Someone may find this info handy if they try to pick it up later. Personally, I don't know enough C to go digging around in the PHP or libcurl source.
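If anyone wants to chase this further outside Guzzle, a standalone way to log read-callback invocations and watch for a call after EOF might look like this (the upload URL is a placeholder, and this is only a diagnostic sketch):

```php
<?php
// Sketch: wrap cURL's read callback with logging to observe whether it
// is invoked again after it has already returned an empty string (EOF),
// the symptom described above. The target URL is a placeholder.
$src = fopen('php://temp', 'r+');
fwrite($src, str_repeat('x', 1024));
rewind($src);

$calls = 0;

$ch = curl_init('https://example.com/upload');
curl_setopt($ch, CURLOPT_UPLOAD, true);
curl_setopt($ch, CURLOPT_INFILE, $src);
curl_setopt($ch, CURLOPT_INFILESIZE, 1024);
curl_setopt($ch, CURLOPT_READFUNCTION, function ($ch, $fh, $length) use (&$calls) {
    $data = fread($fh, $length);
    $calls++;
    // A call logged after a previous 0-byte return would reproduce
    // the double-read symptom.
    error_log(sprintf('read callback #%d returned %d bytes', $calls, strlen($data)));
    return $data === false ? '' : $data;
});
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
curl_close($ch);
```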
On some systems, I get the above error when trying to upload a file to Amazon S3.
Triggering Code:
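(The original snippet was not preserved in this copy of the thread; the following is a representative reconstruction consistent with the discussion above, with placeholder bucket, key, and path names.)

```php
<?php
// Representative reconstruction: upload a local file to S3 via the
// multipart UploadBuilder. Bucket, key, region, and $url are placeholders.
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Model\MultipartUpload\UploadBuilder;

$client = S3Client::factory(array('region' => 'us-east-1'));
$url = 'file:///tmp/example-file';

$uploader = UploadBuilder::newInstance()
    ->setClient($client)
    ->setSource($url)            // file path / file:// URL as the source
    ->setBucket('my-bucket')
    ->setKey('my-key')
    ->build();
$uploader->upload();
```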
The error can be prevented by switching the source to be in memory:
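(Again a reconstruction rather than the original snippet: the file contents are wrapped in a Guzzle EntityBody, so no fstat-based size is involved. It continues from the sketch above.)

```php
<?php
// Reconstruction of the in-memory workaround: wrap the file contents
// in a Guzzle EntityBody instead of passing the path directly.
use Guzzle\Http\EntityBody;

$uploader = UploadBuilder::newInstance()
    ->setClient($client)
    ->setSource(EntityBody::factory(file_get_contents($url)))
    ->setBucket('my-bucket')
    ->setKey('my-key')
    ->build();
$uploader->upload();
```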
Maybe you have seen this before, and have an idea how to fix this automatically?
Complete Stack Trace: