This repository has been archived by the owner on Feb 12, 2021. It is now read-only.

fix for no content-length in header for download #76

Merged: 3 commits merged into andrewrk:master on Oct 9, 2015

Conversation

@kacole2 (Contributor) commented Feb 6, 2015

When using a 3rd-party S3 storage service (not AWS), there are instances when the server returns a response without a 'content-length' header. This makes anything that relies on content-length in the doTheDownload function fail. After troubleshooting, it turns out some 3rd-party S3 services specify 'transfer-encoding: chunked' instead:

{ date: 'Fri, 06 Feb 2015 20:18:09 GMT',
  server: 'ViPR/1.0',
  'x-amz-request-id': '0a6c5fc3:14af4ae3238:f7f1:b',
  'x-amz-id-2': '97e9f1ba70052a7b85e9db09cd13f09454828f6f0a5b92e5f691c07656a7ec10',
  etag: '"e1f0060d53cbfab7f917642abaa7a1c6"',
  'last-modified': 'Fri, 06 Feb 2015 16:09:15 GMT',
  'x-emc-mtime': '1423238955945',
  'content-type': 'application/zip',
  'transfer-encoding': 'chunked' }

This workaround repeats some of the code after verifying that the content-length is undefined. It creates a writable stream to save the file while still emitting progress. downloader.progressTotal is advanced by each chunk's length, while downloader.progressAmount is kept one byte behind it. This holds the reported progress below completion (i.e. 99%); it's impossible to know the complete size because the content-length is not specified. When the transfer ends, 1 is added to downloader.progressAmount to reach 100%.

When using a 3rd party S3 storage service (not AWS), there are
instances when the server returns a header without 'content-length'
specified and instead uses 'transfer-encoding': 'chunked'. This
workaround will create a writable stream to save a file and still
emit progress total, but at 99%. When the transfer ends, the progress
will add 1 to be complete at 100%.
@kacole2 (Contributor, Author) commented Feb 6, 2015

added cb() to finish it out when it ends

@kacole2 (Contributor, Author) commented Jul 2, 2015

??? Just checking in on this. No movement.

@andrewrk (Owner) commented Sep 6, 2015

Hello. I'm sorry for taking so long to get to this. I don't have the development efforts available currently to support anything other than Amazon S3.

@faceleg (Contributor) commented Oct 7, 2015

Please add tests to prove this does not break normal S3 and works with your 3rd party system

@kacole2 (Contributor, Author) commented Oct 8, 2015

@faceleg it does not break normal S3. I just ran the tests:

kcoleman-mbp:node-s3-client kcoleman$ mocha


  MultipartETag
    ✓ returns unmodified digest

  s3
    ✓ get public URL
    ✓ uploads (716ms)
    ✓ downloads (860ms)
    ✓ downloadBuffer (528ms)
    ✓ downloadStream (553ms)
    ✓ lists objects (314ms)
    ✓ copies an object (370ms)
    ✓ moves an object (1805ms)
    ✓ deletes an object (382ms)
    ✓ uploads a folder (723ms)
    ✓ downloads a folder (673ms)
    ✓ uploadDir with deleteRemoved (723ms)
    ✓ lists objects (316ms)
    ✓ downloadDir with deleteRemoved (315ms)
    ✓ upload folder with delete removed handles updates correctly (2278ms)
    ✓ multipart upload (18284ms)
    ✓ download file with multipart etag (27591ms)
    ✓ deletes a folder (1218ms)


  19 passing (58s)

@faceleg (Contributor) commented Oct 8, 2015

Are there tests to show it works with your 3rd party system?

Thanks for letting me know it passes for normal s3

tested and works with both s3 and s3-compatible storage

Signed-off-by: Kendrick Coleman <[email protected]>
@kacole2 (Contributor, Author) commented Oct 9, 2015

I have added an endpoint parameter to run all tests against a different endpoint. I've tested with both AWS S3 and EMC ECS Test Drive:

$ export [email protected]
$ export S3_SECRET=MYSECRETKEY
$ export S3_ENDPOINT=object.ecstestdrive.com
$ export S3_BUCKET=s3motion
$ mocha


  MultipartETag
    ✓ returns unmodified digest

  s3
    ✓ get public URL
    ✓ uploads (2888ms)
    ✓ downloads (664ms)
    ✓ downloadBuffer (621ms)
    ✓ downloadStream (898ms)
    ✓ lists objects (478ms)
    ✓ copies an object (1205ms)
    ✓ moves an object (2429ms)
    ✓ deletes an object (863ms)
    ✓ uploads a folder (978ms)
    ✓ downloads a folder (769ms)
    ✓ uploadDir with deleteRemoved (1050ms)
    ✓ lists objects (461ms)
    ✓ downloadDir with deleteRemoved (392ms)
    ✓ upload folder with delete removed handles updates correctly (2528ms)
    ✓ multipart upload (20499ms)
    ✓ download file with multipart etag (10568ms)
    ✓ deletes a folder (1205ms)


  19 passing (50s)
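The endpoint switch used in the run above could be wired in along these lines. This is only a sketch: the helper name `s3OptionsFromEnv` and the exact env-var name for the access key are assumptions (the transcript redacts it), while S3_SECRET and S3_ENDPOINT match the exports shown:

```javascript
// Hypothetical helper: build S3 client options from the environment,
// setting `endpoint` only when S3_ENDPOINT is present so that plain
// AWS S3 keeps its default endpoint.
function s3OptionsFromEnv(env) {
  const opts = {
    accessKeyId: env.S3_KEY,         // env-var name is an assumption
    secretAccessKey: env.S3_SECRET,
  };
  if (env.S3_ENDPOINT) {
    opts.endpoint = env.S3_ENDPOINT; // e.g. object.ecstestdrive.com
  }
  return opts;
}
```

Leaving `endpoint` out entirely when the variable is unset (rather than passing `undefined`) keeps the default-AWS code path untouched.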

@faceleg (Contributor) commented Oct 9, 2015

Thank you for bearing with me.

I'm keen to get some CI action for this repo. Have you used http://s3ninja.net?

faceleg added a commit that referenced this pull request Oct 9, 2015
fix for no content-length in header for download
@faceleg merged commit 6dca5b8 into andrewrk:master on Oct 9, 2015
@kacole2 (Contributor, Author) commented Oct 9, 2015

CI would be nice... no, I have not used s3ninja. Glad to see this finally get merged.

@kacole2 kacole2 deleted the fixForNoContentLength branch October 9, 2015 02:38
@faceleg (Contributor) commented Oct 9, 2015

Thanks to @andrewrk for kindly agreeing to allow me to help him maintain the repo.

And you for allowing me to be really annoying and insist on more tests :P
