This repository has been archived by the owner on Nov 20, 2018. It is now read-only.

5 - Support uploads to S3 via CloudFront distribution #1016

Closed
rnicholus opened this issue Oct 16, 2013 · 74 comments

@rnicholus
Member

The current plan is to support simple (non-chunked) uploads to a CloudFront distribution. Chunked uploads are currently not possible when targeting a CloudFront distribution, since CloudFront rips off the Authorization header containing the signature before forwarding the request on to S3. The Authorization header is a required field in the request when using any of the S3 multipart upload REST calls, which are needed to support Fine Uploader S3's chunking and auto-resume features. I have opened a request in the CloudFront forums asking for this behavior to be modified so that multipart upload requests can target a CloudFront distribution.

The planned support for this is still mostly undetermined, and it is not yet decided whether it will be part of 4.0. Making this part of 4.0 is possible, but looking less likely as I run into issues with CloudFront's handling of upload-related requests. I'm currently struggling to get path patterns for upload requests to work. I've opened another thread in the forum detailing my issue at https://forums.aws.amazon.com/thread.jspa?threadID=137627&tstart=0.

@ghost ghost assigned rnicholus Oct 16, 2013
@rnicholus
Member Author

No useful responses from Amazon in the forums. This is going to be postponed until a later release.

@rnicholus
Member Author

Amazon has confirmed that they have no plans to stop removing the Authorization header from requests. This means that we will not be able to make use of the multipart upload API via a CloudFront distribution.

@jasonshah

So, is it fair to say that simple uploads to Cloudfront are not yet supported? We are evaluating uploaders, and one of our customers wants to upload huge (multi-GB) files. Uploading to S3 with Fine Uploader is proving to be severely limited by their proxy, and I wanted to try uploads straight to Cloudfront.

@rnicholus
Member Author

The only possible way to upload files that target a CloudFront distribution is to send the entire file in one multipart-encoded POST request. This means that the chunking, resume, and pause features are not possible. If a file fails mid-upload, each retry attempt will need to start over from the first byte. In fact, no credentialed REST calls are possible at all, since AWS rips off the Authorization header.

If you do not enable the chunking/resume features in Fine Uploader S3, uploads to a CF distribution should theoretically work, but there is another problem: Fine Uploader S3 expects to be able to determine the bucket name from your endpoint. If you are uploading to a CF distribution, that assumption is no longer valid. We would have to allow a bucket name to be specified via an option and API method for CF distro endpoints.
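
To illustrate that assumption, here is a minimal sketch (hypothetical code, not Fine Uploader's actual internals) of deriving a bucket name from a virtual-hosted S3 endpoint, and why the same logic yields garbage for a CF domain:

// Hypothetical sketch: derive the bucket from a virtual-hosted S3 URL.
function bucketFromEndpoint(endpoint) {
    // strip the protocol, isolate the hostname, take its first label
    var host = endpoint.replace(/^https?:\/\//, '').split('/')[0];
    return host.split('.')[0];
}

bucketFromEndpoint('https://mybucket.s3.amazonaws.com');     // "mybucket"
bucketFromEndpoint('https://d111111abcdef8.cloudfront.net'); // "d111111abcdef8" - not a bucket!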

@rnicholus
Member Author

@jasonshah What sort of limits does your customer's proxy enforce?

@jasonshah

@rnicholus the customer's proxy has some kind of bandwidth throttling in place. In one location, they get 3.5MB/s up (which is very acceptable), whereas in another they get <100K/s up (which isn't). Running some traceroutes and nslookups against the slow computer reveals a proxy server in the way, which is likely limiting bandwidth in some way. We've asked to see if there's a way to get in touch with their network engineers, but it's a huge company and they treat that as a last resort. So one theory is that uploading to CloudFront might help speed this up.

@rnicholus
Member Author

@jasonshah If uploading to CF fixes the issue, I can see why that might be appealing. I haven't pursued CF support in Fine Uploader S3 for a few reasons:

  • I see chunking, resume, and efficient retries as very important features. These would have to be turned off if uploading to CF.
  • Since a bucket name would have to be explicitly specified alongside a CF endpoint URL, this adds more complexity and confusion to the API/options.
  • There are other outstanding issues with CF that make uploads potentially difficult, such as this one: https://forums.aws.amazon.com/thread.jspa?threadID=137627&tstart=0.
  • AWS doesn't seem interested in supporting the upload-to-CF workflow. There's no telling what other issues may appear over time if we explicitly support this.

@jasonshah

@rnicholus that last issue seems to be the biggest one. The feature was announced four months ago; however, I've yet to find a single example from them of how this might work (though perhaps I haven't looked hard enough).

Thanks for thinking of it. We'll keep looking to see if we can solve this problem another way.

@jasonshah

@rnicholus Some interesting data, FYI: our customer demonstrated that using the Fine Uploader test to upload from NYC to US East (Virginia), he can achieve ~3-7MB/s. Using another product's uploader, which does a simple PUT to a CDN (EdgeCast in this case), he can achieve 55MB/s. The CDN can provide huge speed increases.

@rnicholus
Member Author

I'm afraid I'm not familiar with this CDN. Is the customer cutting AWS out of the picture entirely?

@pulkitjalan

Amazon has now said that CloudFront does not remove the Authorization header on PUT, POST, PATCH, DELETE, and OPTIONS requests.

https://forums.aws.amazon.com/message.jspa?messageID=528729#528729

@rnicholus
Member Author

Excellent. Thanks for the update. That should theoretically make it possible for us to modify Fine Uploader S3 to allow uploads through a CF distribution.

@pulkitjalan

I was testing this with Fine Uploader and I ran into another issue, having to do with the fact that CloudFront adds the 'X-Amz-Cf-Id' header to the request. I got past this issue by using Origin Access Identities, as outlined in this forum post: https://forums.aws.amazon.com/thread.jspa?messageID=345913&#345913.

Looking forward to seeing this feature in Fine Uploader :)

@jasonshah

@pulkit-clowdy after setting up the OAI, you were able to use FineUploader to upload to S3 via CloudFront?
If yes, did you experience any performance improvements?

@pulkitjalan

I was able to upload via CloudFront and use chunking. Yes, there was a significant improvement in performance, considering my bucket is in the us-east-1 region and I'm uploading from the UK.

Direct to S3 = ~1MB/s
S3 via CloudFront = ~4-6MB/s

@pulkitjalan

At the moment it's quite hacky to get this working; almost all security checks have to be disabled.
Is this feature going to be implemented in Fine Uploader, and if so, for which version is it planned?

@rnicholus
Member Author

Probably 5.1. 5.0 is currently in development.

@pulkitjalan

Ok, thanks for the update

@rnicholus rnicholus changed the title Support uploads to S3 via CloudFront distribution 5 - Support uploads to S3 via CloudFront distribution May 28, 2014
@cybertrand

First of all, thank you @rnicholus for building such a useful piece of software. I'm also looking to use Fine Uploader to upload content to S3 via CloudFront. I understand from this thread that this should be released in 5.1, and I was wondering if you have an idea of when that might be.

The company I work for is looking to implement a Web uploader with this specific feature. We might be able to contribute to the project to help develop it if that's something you're interested in.

@rnicholus
Member Author

Thanks for the kind words @cybertrand. Fine Uploader wouldn't be where it is today without my employer, Widen, and the development help of @feltnerm along with the input of @uriahcarpenter as well.

You are correct that this feature is scheduled for 5.1, along with several others. I don't think this will be terribly difficult, assuming there aren't any further hidden obstacles (such as the issue where CF stripped Authorization headers a while back, making this impossible - since fixed).

We are currently working on a hotfix and some administrative (non-feature) tasks at the moment. Once those are complete, #1198 is first in line, followed by uploads to CF.

I suspect that the code changes to Fine Uploader S3 will be minimal to support uploads to CF. One thing that will need to change is the code that uses the AWS endpoint to determine the bucket name. You see, we must embed the bucket name in the request, and, with uploads directly to S3, we can programmatically determine the bucket name simply by looking at the S3 endpoint URL. With uploads to a CF distro, that will no longer be a safe assumption, so we will need to solicit the actual bucket name from integrators via an additional option (and provide an API method for dynamic adjustment).
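
A minimal sketch of what that could look like, using the objectProperties.bucket option that appears in configurations later in this thread (the distribution domain and bucket name are placeholders):

var uploader = new qq.s3.FineUploader({
    request: {
        // a CloudFront distribution domain, not an S3 endpoint (placeholder)
        endpoint: 'https://d111111abcdef8.cloudfront.net'
    },
    objectProperties: {
        // the bucket can no longer be derived from the endpoint,
        // so the integrator supplies it explicitly (placeholder name)
        bucket: 'my-upload-bucket'
    }
});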

@cybertrand

Thank you very much for the additional information @rnicholus; makes sense about the S3 bucket name. It's great to hear that this should be straightforward to implement and that it should happen soon. On that last note, are you able to share any rough timeframes: are we talking about 1, 3, 6 months? I'm asking so that we can make the best decision on waiting vs. implementing now.

I was also wondering: will the implementation of this feature include the ability to upload multiple chunks/parts of the same file in parallel (to S3 via CloudFront)? That's what we're after in order to accelerate file uploads, and this would provide a really nice commodity solution, instead of having to buy and implement a UDP-based transfer acceleration solution. Note: it would be really useful to have a similar solution for accelerated downloads, whereby your JS client would perform multiple HTTP Range requests to CloudFront/S3 in parallel to download a single large file (I believe both S3 and CloudFront support this).

Thanks again!

@rnicholus
Member Author

Have you read about the concurrent chunking feature we released in 5.0? It allows multiple chunks for a single large file to be uploaded in parallel to any endpoint. The feature is aimed at large single-file uploads, since chunks have always been uploaded in parallel (one per file) when multiple files are selected.
http://docs.fineuploader.com/branch/master/features/concurrent-chunking.html
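
For reference, enabling it is a small configuration change; a sketch based on the concurrent chunking docs linked above (the endpoint is a placeholder):

var uploader = new qq.s3.FineUploader({
    request: { endpoint: 'https://mybucket.s3.amazonaws.com' }, // placeholder
    chunking: {
        enabled: true,
        concurrent: {
            enabled: true // upload several chunks of one file at once
        }
    }
});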

@cybertrand

I did read about it @rnicholus and was wondering if it would be supported for uploads to S3 via CF and for a single file specifically.

I ran some tests doing multipart uploads of single large files to S3 with 3-5 chunks in parallel via a native client app (Cyberduck, also open source) from Los Angeles to an S3 bucket in US Standard, and can vouch for the fact that it yields huge gains in throughput (consistently reaching 90+Mbps on a dedicated 100Mbps line). Similarly, although a single file/chunk upload to CF will yield much better throughput than one to S3 (since it's uploading to a local CF node), transfers could be further accelerated by allowing multiple chunks in parallel. I don't know how hard this would be to implement or whether it requires a lot of changes compared to what you built for S3; just some food for thought.

I can also say from experience that your decision to only implement this feature with chunking, in order to allow pause/resume as well as automatic retry of failed uploads, was a good call. We've had recurring issues uploading single large files to S3 via another CDN: some uploads to S3 fail, our CDN provider confirms a failed PUT to S3, but we don't get any additional information from Amazon...

Sorry, I don't mean to nag, but are you able to share any timeframe for the release of 5.1? Thanks a lot for responding so quickly to everything!

@rnicholus
Member Author

The concurrent chunking feature was implemented as a core feature in Fine Uploader 5.0. This means that it is supported for all endpoint types: Azure, S3, custom/traditional. I see no reason why a CF distro would be a special case.

I don't yet have a time estimate for 5.1. Stay tuned, and I'll try to post it when I know more.

@cybertrand

Great. I'll watch the thread for updates on 5.1. Thanks again!

@rbliss

rbliss commented May 18, 2015

I've been testing the x-amz-meta-#### headers, and they're working great. Other headers appear to come through fine too.

After further discussion with the guys at Amazon, it appears the specific issue I was having, where the headers don't appear to come through, is really due to setting up an 'Origin Access Identity' (OAI) on CloudFront. You have to be very careful when you set up an OAI and understand what's going on. An OAI will cause CloudFront to behave almost as if a different user (the OAI) is creating the S3 object, rather than the user whose access key you specify in FineUploader.

An OAI is optional, but if you use one, you should use an acl of bucket-owner-full-control instead of private to get an S3 object that resembles what would normally happen if you hit S3 directly.
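
In Fine Uploader terms, that corresponds to something like the following sketch (the endpoint and bucket are placeholders; objectProperties.acl is the option referenced later in this thread):

var uploader = new qq.s3.FineUploader({
    request: {
        endpoint: 'https://d111111abcdef8.cloudfront.net' // placeholder CF domain
    },
    objectProperties: {
        bucket: 'my-upload-bucket', // placeholder
        // with an OAI, CloudFront effectively creates the object as a
        // different user, so grant the bucket owner full control rather
        // than relying on the default 'private' ACL
        acl: 'bucket-owner-full-control'
    }
});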

Other than that, I'm happy with the results I'm seeing from Cloudfront and it appears the earlier issues are no longer a problem.

@rnicholus
Member Author

My understanding was that an OAI is required when making signed requests; otherwise, the additional headers attached to the request by CF will be rejected by S3, as they are not accounted for in the signature.

In the case of "bucket-owner-full-control", I'm guessing the "owner" is CF when using an OAI. Is this your understanding? If this is true, then based on the Canned ACL descriptions, it seems public-read will result in the same problem for the bucket owner.

@rbliss

rbliss commented May 18, 2015

Just a point of clarification: by signed requests, are you referring to the signed Authorization header required to authenticate a request in the S3 REST API, or signed urls used to restrict access to objects in S3?

@rnicholus
Member Author

The former. This is how Fine Uploader signs all upload requests.

@rbliss

rbliss commented May 18, 2015

Just wanted to double check. You do not need an OAI to send signed requests. In fact, if you turn off the OAI (or never set one up) on your CloudFront distro, everything should work just as if you were interacting with S3 directly.

@rnicholus
Member Author

That wasn't my experience before. In fact, the use of OAIs in this context was discussed on the AWS forums. One example can be found at https://forums.aws.amazon.com/thread.jspa?messageID=345913&#345913. I'll have to try again without an OAI at some point.

@jasonshah

@rbliss Have you had a chance to measure what kind of performance or stability gains you are seeing with uploads to Cloudfront vs. uploads to S3?

@rbliss

rbliss commented May 18, 2015

@jasonshah I haven't had a chance. Anecdotally, it does seem faster.
@rnicholus Definitely give it a shot without an OAI. Life will be glorious.

@rbliss

rbliss commented May 19, 2015

I do have to warn anyone attempting to use CloudFront: it is a bit fiddly in terms of configuration. If you're having issues, it's likely with how you've set up CloudFront. I can write up configuration details if anyone is testing this.

@winzig

winzig commented May 28, 2015

I've been following this thread for a while with great interest. It sounds as if this is now working, but it's not totally clear what the procedure is for setting up CF to use with FineUploader. Will this be documented?

@rnicholus
Member Author

Once this is tested and verified on our end, we'll update the S3 Uploads feature page with CF-specific details.

@rbliss

rbliss commented May 28, 2015

Here are my notes on setting up a proper CF distribution:

Origin Settings

  • Restrict Bucket Access (shows up after you choose your S3 bucket) - Easiest if you choose No; see Notes on OAI below.

Default Cache Behavior Settings

  • Allowed HTTP Methods - Must choose GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE, else you can't create an S3 object.
  • Forward Query String - Must choose Yes, else chunked uploads will fail.
  • Restrict Viewer Access - Easiest if you choose No, else you'll have to do a lot of tweaking to send a signed CloudFront URL in addition to all the S3 URL parameters.

Notes on OAI: If you do set up an OAI, make sure to set FineUploader's objectProperties.acl to bucket-owner-full-control or an appropriate equivalent. A CloudFront distribution with an OAI set up on an origin pointing to an S3 bucket will cause CloudFront to act as the OAI user when interacting with the S3 bucket, regardless of the access key specified in FineUploader. Hence any objects with an acl of private will belong to CloudFront's OAI user and not to the access key's user. A minimal client-side configuration reflecting these notes is sketched below.
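
A minimal sketch assuming the settings above (the distribution domain, access key, bucket name, and signature endpoint are all placeholders):

var uploader = new qq.s3.FineUploader({
    element: document.getElementById('uploader'),
    request: {
        endpoint: 'https://d111111abcdef8.cloudfront.net', // placeholder CF domain
        accessKey: 'AKIA...'                               // placeholder access key
    },
    objectProperties: {
        bucket: 'my-upload-bucket',      // required: not derivable from a CF endpoint
        acl: 'bucket-owner-full-control' // only needed if the distribution uses an OAI
    },
    chunking: { enabled: true },              // works now that CF forwards Authorization
    signature: { endpoint: '/s3/signature' }  // placeholder signing endpoint
});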

@ludofleury

My bad, typo. @rbliss I confirm that with your information, the upload is going smoothly through CloudFront to S3. Thank you very much.

@rnicholus
Member Author

Sounds like this is finally working with Fine Uploader S3. I'm continually busy with support and licensing requests, as well as the S3 v4 support feature, but it is still on my radar to verify and update the docs.

@andreij

andreij commented Nov 10, 2015

I don't know if it's the right place to ask, but maybe your answers can help other people stuck like me.

I have set up a basic uploader page (S3 + CORS) and it works as expected.

When I switch to the CDN upload, the response is:

<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>MethodNotAllowed</Code>
  <Message>The specified method is not allowed against this resource.</Message>
  <Method>POST</Method>
  <ResourceType>OBJECT</ResourceType>
  <RequestId>2B0507CBD403F9C3</RequestId>
  <HostId>nEV/gHkAE3wTCKP+ZJzV/VQXNEvFP+/w2UZtyIwTrkkV+/i9aFK/Ap8qTfGfDZ+PAl2kAYD7vE4=</HostId>
</Error>

The response headers are:

Access-Control-Allow-Method POST, PUT, DELETE
Access-Control-Allow-Origin *
Access-Control-Max-Age 3000
Allow GET, DELETE, HEAD, PUT
Connection keep-alive
Content-Type application/xml
Server AmazonS3
Transfer-Encoding chunked
Vary    Origin, Access-Control-Request-Headers, Access-Control-Request-Method
Via 1.1 5d53b9570a535c2d94ce93c20abbd471.cloudfront.net (CloudFront)
X-Amz-Cf-Id dw-N_pLmOmEqlbri-B2l7l2a6TGWm2_tAhr5y9_InMU8ZuHYunZSEw==
X-Cache Error from cloudfront

Below is the default behavior configuration:

[screenshot: default cache behavior configuration]

No OAI set.

What I have changed in the Fine Uploader config is:

request: {
  endpoint: 'dqu6rri____.cloudfront.net'
},
objectProperties: {
  bucket: 'fineup____-test'
}

Bucket policy:

{
    "Version": "2012-10-17",
    "Id": "Policy1447166234666",
    "Statement": [
        {
            "Sid": "Stmt1447166228927",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::fineup____-test/*"
        }
    ]
}

CORS:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>DELETE</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <ExposeHeader>ETag</ExposeHeader>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

The bucket is located in Ireland.

I'm quite clueless, so any help on how to track down the problem would be appreciated.

@rnicholus
Member Author

You'll need to fix your comment above. Take another look.

@rnicholus
Member Author

Either way, this is a question for the AWS forums, as it appears to be some type of configuration issue on your end.

@andreij

andreij commented Nov 10, 2015

Dear @rnicholus, if you think this is the wrong place, I can remove it so as not to clutter the issue's conversation.

@rbliss

rbliss commented Nov 10, 2015

@andreij Post it in the AWS CloudFront forums and I'll respond.

@andreij

andreij commented Nov 10, 2015

@rbliss Thanks! Here's the follow-up: https://forums.aws.amazon.com/thread.jspa?threadID=219620

@nalbion
Contributor

nalbion commented Oct 6, 2017

@rnicholus @rbliss I'm getting an error when I try to upload large files to S3 through CloudFront:

<Error>
  <Code>AccessDenied</Code>
  <Message>There were headers present in the request which were not signed</Message>
  <HeadersNotSigned>x-amz-cf-id</HeadersNotSigned>
  <RequestId>123</RequestId>
  <HostId>abc=</HostId>
</Error>

FineUploader config:

var endpoint = 'mydomain.com';
var uploader = new qq.s3.FineUploader({
            // debug: true,
            element: el,
            request: {
                endpoint: 'https://' + endpoint,
                accessKey: s3AccessKey,
                params: metadata
            },
            chunking: {
                enabled: true
            },
            objectProperties: {
                bucket: bucket,
                host: endpoint,
                region: 'ap-southeast-2',
                serverSideEncryption: true
            },
            signature: {
                endpoint: domain + '/file-uploads/sign-request',
                version: 4,
                customHeaders: customHeaders
            }
        });

CloudFormation template:

Resources:
  Distribution:
    Type: "AWS::CloudFront::Distribution"
    Properties:
      DistributionConfig:
        Aliases:
          - !Sub files.${HostedZone}
        DefaultCacheBehavior:
          AllowedMethods:
            - "DELETE"
            - "GET"
            - "HEAD"
            - "OPTIONS"
            - "PATCH"
            - "POST"
            - "PUT"
          CachedMethods:
            - "GET"
            - "HEAD"
            - "OPTIONS"
          Compress: true
          ForwardedValues:
            QueryString: true
          TargetOriginId: "Upload Bucket"
          ViewerProtocolPolicy : "redirect-to-https"
        Enabled: true
        HttpVersion: "http2"
        Origins:
          - DomainName: !Sub ${BucketName}.s3-ap-southeast-2.amazonaws.com
            Id: "Upload Bucket"
            CustomOriginConfig:
              HTTPPort: 80
              HTTPSPort: 443
              OriginProtocolPolicy: https-only
        ViewerCertificate:
          AcmCertificateArn: !Ref Certificate
          SslSupportMethod: sni-only

  Bucket:
    Type: "AWS::S3::Bucket"
    Properties:
      BucketName: !Ref BucketName
      CorsConfiguration:
        CorsRules:
          - AllowedOrigins:
              - "*"
            AllowedHeaders:
              - "*"
            AllowedMethods:
              - "GET" - "POST" - "PUT" - "DELETE"
            ExposedHeaders:
              - "Date"
              - "ETag"

  S3BucketPolicy:
    Type: "AWS::S3::BucketPolicy"
    DependsOn: Bucket
    Properties:
      Bucket:
        !Ref BucketName
      PolicyDocument:
        Statement:
          - Sid: DenyIncorrectEncryptionHeader
            Action:
              - "s3:PutObject"
            Effect: "Deny"
            Resource: !Sub arn:aws:s3:::${BucketName}/*
            Principal: "*"
            Condition:
              StringNotEquals:
                s3:x-amz-server-side-encryption:
                  - "AES256"
          - Sid: DenyUnEncryptedObjectUploads
            Action:
              - "s3:PutObject"
            Effect: "Deny"
            Resource: !Sub arn:aws:s3:::${BucketName}/*
            Principal: "*"
            Condition:
              StringEquals:
                s3:x-amz-server-side-encryption:
                  - !Ref AWS::NoValue

@nalbion
Contributor

nalbion commented Oct 6, 2017

Okay... I changed

- DomainName: !Sub ${BucketName}.s3-ap-southeast-2.amazonaws.com

to

- DomainName: !Sub ${BucketName}.s3.amazonaws.com

(removed the region) and I've moved on to the next error: The request signature we calculated does not match the signature you provided. Check your key and signing method.

The error says that the CanonicalRequest included host:my-bucket.s3.amazonaws.com

...oh - objectProperties.host should be my-bucket.s3.amazonaws.com, not endpoint - it's all working now 🍾 🎉
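
For anyone hitting the same signature mismatch, the fix amounts to the following change in the config above (a sketch; the hostname pattern follows the error message):

objectProperties: {
    bucket: bucket,
    // host must be the bucket's S3 hostname, not the CloudFront domain,
    // so the host header in the v4 CanonicalRequest matches what S3 expects
    host: bucket + '.s3.amazonaws.com',
    region: 'ap-southeast-2',
    serverSideEncryption: true
}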

fragilbert added a commit to fragilbert/file-uploader that referenced this issue Aug 10, 2019
* docs(features.jmd): Document S3 Transfer Acceleration to S3 feature (FineUploader#1627)

Also removed mention of CF craziness.
closes FineUploader#1556
closes FineUploader#1016