
Timeout Error after 2 min of start of upload #43

Open
MandeepKalkhanda opened this issue Jan 10, 2017 · 17 comments

Comments

@MandeepKalkhanda

Hi,

I am getting an exception from the AWS-SDK after 2 minutes of starting an upload. I'm uploading 1K+ files to S3. This occurs every time the upload process takes more than 2 minutes.

Below is the trace of the exception.
...\node_modules\aws-sdk\lib\request.js:31
throw err;
^

Error: S3 headObject Error: Error: write EPROTO
    at Object.exports._errnoException (util.js:870:11)
    at exports._exceptionWithHostPort (util.js:893:20)
    at WriteWrap.afterWrite (net.js:763:14)
    at Request.callListeners (...\node_modules\aws-sdk\lib\sequential_executor.js:108:43)
    at Request.emit (...\node_modules\aws-sdk\lib\sequential_executor.js:78:10)
    at Request.emit (...\node_modules\aws-sdk\lib\request.js:668:14)
    at Request.transition (...\node_modules\aws-sdk\lib\request.js:22:10)
    at AcceptorStateMachine.runTo (...\node_modules\aws-sdk\lib\state_machine.js:14:12)
    at ...\node_modules\aws-sdk\lib\state_machine.js:26:10
    at Request.<anonymous> (...\node_modules\aws-sdk\lib\request.js:38:9)
    at Request.<anonymous> (...\node_modules\aws-sdk\lib\request.js:670:12)
    at Request.callListeners (...\node_modules\aws-sdk\lib\sequential_executor.js:116:18)
    at Request.emit (...\node_modules\aws-sdk\lib\sequential_executor.js:78:10)
    at Request.emit (...\node_modules\aws-sdk\lib\request.js:668:14)
    at Request.transition (...\node_modules\aws-sdk\lib\request.js:22:10)
    at AcceptorStateMachine.runTo (...\node_modules\aws-sdk\lib\state_machine.js:14:12)
    at ...\node_modules\aws-sdk\lib\state_machine.js:26:10
    at Request.<anonymous> (...\node_modules\aws-sdk\lib\request.js:38:9)
    at Request.<anonymous> (...\node_modules\aws-sdk\lib\request.js:670:12)

@clineamb
Owner

How big are each of these files? You might want to run the upload in batches. This could just be the entire request timing out if it's always cutting off after 2 minutes.

@MandeepKalkhanda
Author

@clineamb
No file is bigger than 2 MB, but there are more than a thousand files. Do I need to run the upload in batches?

@clineamb
Owner

I would recommend running in batches to see if the volume is the issue. If that's the case, yes, it would be a fix we'd need to look into or make. My recommendation is to batch by directory, and make sure your keyTransform is per folder (probably multiple tasks, run by one master task).

Mind if I see some code?
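For illustration, here's a rough sketch of the per-directory batching I mean (the folder names, bucket name, and credentials below are placeholders, not taken from your project):

var gulp = require('gulp');
var s3 = require('gulp-s3-upload')({
    accessKeyId: '...',
    secretAccessKey: '...'
});

var distDir = 'dist';
var folders = ['css', 'js', 'images']; // one upload task per top-level folder

folders.forEach(function(folder) {
    gulp.task('upload:' + folder, function() {
        return gulp.src(distDir + '/' + folder + '/**/*')
            .pipe(s3({
                Bucket: 'your-bucket-name',
                ACL: 'public-read',
                keyTransform: function(relative_filename) {
                    // Key each object under its own folder prefix.
                    return folder + '/' + relative_filename;
                }
            }));
    });
});

// One master task that triggers the per-folder tasks. In gulp 3 these
// dependencies run in parallel; use something like run-sequence if you
// want to push the folders one at a time.
gulp.task('upload', folders.map(function(folder) {
    return 'upload:' + folder;
}));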

@MandeepKalkhanda
Author

MandeepKalkhanda commented Jan 11, 2017

@clineamb

I am passing the complete dist folder to gulp.src. The dist folder has only 5 subfolders, but each folder has up to 350 image files.


var contentMap_collection = {};
var basePath = 'build/deploy';

return gulp.src(`${distDir}/**/*`, { buffer: false })
    .pipe(s3({
        Bucket: 'assets-path',
        ACL: 'public-read',
        CacheControl: 'max-age=36000000, s-maxage=36000000',
        keyTransform: function(relative_filename) {
            relative_filename = relative_filename.replace(/\\/g, '/');
            var absolute_name = '';
            if (relative_filename.indexOf('.gz') > 0) {
                // Strip the .gz suffix from the key and remember it so the
                // matching Content-Encoding can be set below.
                absolute_name = (basePath + relative_filename).replace('.gz', '');
                contentMap_collection[absolute_name] = true;
            } else {
                absolute_name = basePath + relative_filename;
            }
            return absolute_name;
        },
        manualContentEncoding: function(keyname) {
            var contentEncoding = null;
            if (contentMap_collection[keyname] === true) {
                contentEncoding = 'gzip';
            }
            return contentEncoding;
        }
    }));

@clineamb
Owner

Have you tried not setting {buffer: false}? Setting it forces gulp-s3-upload to use streams, which are really only recommended for larger files.
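For reference, a minimal sketch of the two forms (the glob and bucket name are placeholders):

// Default, buffered mode: each file is read fully into memory before upload.
gulp.src(`${distDir}/**/*`)
    .pipe(s3({ Bucket: 'assets-path' }));

// Streaming mode ({buffer: false}), as in the snippet above: aimed at large files.
gulp.src(`${distDir}/**/*`, { buffer: false })
    .pipe(s3({ Bucket: 'assets-path' }));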

@MandeepKalkhanda
Author

@clineamb

Yes, I also tried without setting {buffer: false}, but the timeout error is thrown in that case as well.

@MandeepKalkhanda
Author

MandeepKalkhanda commented Jan 24, 2017

It seems that file size is not the issue, since the largest file in the project is 2 MB, but the number of files is very large.

@clineamb
Owner

clineamb commented Jan 25, 2017

This might simply be an AWS-SDK restriction on uploads. I'll see if I can find a place to add additional logging and do some research. Does your timeout error give you any additional details aside from just timing out?


@MandeepKalkhanda
Author

@clineamb I have already attached the trace of the error in the first comment. It's basically from the AWS-SDK:
Error: S3 headObject Error: Error: write EPROTO.

@clineamb
Owner

@MandeepKalkhanda while this article is about Lambda, it seems this is a Node version error. Make sure you upgrade your Node to the latest version that works with the AWS-SDK:

https://aws.amazon.com/premiumsupport/knowledge-center/write-eproto-error/

@ShadowManu

Maybe I should open a new issue, but I'm experiencing the same problem under the same conditions: a timeout at 2 minutes (no particularly big files), originating in aws-sdk, though with a different stack trace:

Message:
    S3 putObject Error: TimeoutError: Connection timed out after 120000ms
    at ClientRequest.<anonymous> (/home/shadowmanu/Desktop/stable/zoi-client/node_modules/aws-sdk/lib/http/node.js:56:34)
    at ClientRequest.g (events.js:291:16)
    at emitNone (events.js:86:13)
    at ClientRequest.emit (events.js:185:7)
    at TLSSocket.emitTimeout (_http_client.js:620:10)
    at TLSSocket.g (events.js:291:16)
    at emitNone (events.js:86:13)
    at TLSSocket.emit (events.js:185:7)
    at TLSSocket.Socket._onTimeout (net.js:339:8)
    at ontimeout (timers.js:365:14)
    at tryOnTimeout (timers.js:237:5)
    at Timer.listOnTimeout (timers.js:207:5)
Details:
    domainEmitter: [object Object]
    domain: [object Object]
    domainThrown: false

@norkfy

norkfy commented Feb 28, 2017

I have the same issue as @ShadowManu.

@clineamb
Owner

clineamb commented Mar 1, 2017

This might have to do with permissions on the buckets or a failure to connect. I'll have to dig into it further.

@clineamb clineamb reopened this Mar 1, 2017
@ShadowManu

It normally happens when I'm uploading too many files at the same time (like an initial upload of a folder versus an updated one), so I would rule out permissions. However, there may be an issue with too many concurrent uploads at the same time (and maybe a failure to connect).

@norkfy

norkfy commented Mar 2, 2017

I've found a solution for my problem. You just need to set the right timeout in the aws-sdk settings. Something like this:

const s3 = plugins.s3Upload({
  accessKeyId: '...',
  secretAccessKey: '...',
  httpOptions: {
    timeout: 1000000 // your timeout value, in milliseconds (the SDK default is 120000, i.e. the 2 minutes seen above)
  },
});
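For completeness, a minimal sketch of how that configured instance would then be used in a task (the glob and bucket name are placeholders):

gulp.task('upload', function() {
    return gulp.src('dist/**/*')
        .pipe(s3({ Bucket: 'your-bucket-name' }));
});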

@clineamb
Owner

@ShadowManu -- I'll do a deeper dive into the SDK and check that out. I'll have to see if I can add a number-of-concurrent-uploads option, with any SDK limitations as a max. For now, can you also try @norkfy's solution above and see if that works for you?

@ShadowManu

I've been manually limiting which files get uploaded so they stay under the 2-minute timeout, but I can check @norkfy's comment later today.
