This repository has been archived by the owner on Nov 20, 2018. It is now read-only.

retry count does not reset with chunked uploads to S3 #1172

Closed
simshaun opened this issue Mar 25, 2014 · 26 comments · Fixed by #1964

Comments

@simshaun

We are debugging an issue our client is seeing in Firefox with the S3 uploader where it sometimes fails with

[Fine Uploader 4.3.1] Received response status 400 with body: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>RequestTimeout</Code><Message>Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.</Message>

The uploader has chunking enabled because the files being uploaded are typically 2 to 6 GB.

I have no idea as to the cause of that error (ideas?), but with the retry feature enabled, the failed chunk seems to upload successfully after 1 or 2 retries. The problem is that the retry counter doesn't appear to reset to 0 when a chunk does upload successfully.

As it stands, it seems we need to set an arbitrarily large number for maxAutoAttempts to accommodate a large number of retries on these files, which are chunked into over 1,000 pieces.
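
For context, the configuration in question is essentially the stock S3 setup with chunking and auto-retry turned on. A minimal sketch (the bucket and signature endpoints here are placeholders, not our real values):

var uploader = new qq.s3.FineUploader({
    element: document.getElementById('uploader'),
    request: {
        // placeholder bucket URL
        endpoint: 'https://our-bucket.s3.amazonaws.com',
        accessKey: 'OUR_AWS_PUBLIC_KEY'
    },
    signature: {
        // placeholder signature server
        endpoint: '/s3/signature'
    },
    chunking: {
        enabled: true
    },
    retry: {
        enableAuto: true,
        maxAutoAttempts: 10
    }
});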

@rnicholus
Member

It sounds like you are focusing on the wrong problem here. I'm hesitant to make any changes to account for fundamental server or network flaws. If uploads are frequently failing, you might want to look into the root cause. If it takes a large number of retries to successfully upload a single large file, surely your users must be inconvenienced already.

@simshaun
Author

Yes, we are still investigating the root cause. Still, I think the retry count should be reset after each chunk succeeds. Perhaps this is a feature request more than a bug report.


Regarding the root cause, we just tested uploading nine separate 6GB files in Chrome and it didn't trigger that error even once. Firefox, however, is exhibiting the error quite frequently.

I'm inclined to think this is an issue with Firefox, but I don't yet know how to tell if it's the browser itself or an issue with the code.

Do you think any part of FineUploader (chunking, maybe) could be leading to this problem only in Firefox?


Additional clues:

These errors begin occurring frequently when more than one file is being uploaded at once (multiple browser tabs, each uploading a file).

@rnicholus
Member

We haven't had any other reports of such an issue. What version of Firefox, and do you have a set of steps to reliably reproduce?

@simshaun
Author

I've reproduced this error 5/5 times on two different PCs under two different ISPs/networks. Mine at our office, and through a VPN to a PC at our client's office.

  • Both PCs are running Win 7/Firefox 28.
  • For consistency, we are attempting to upload the same 1 GB file on both PCs.

To reproduce, we:

  1. Open an S3 upload page 3 times (3 different tabs). Chunked uploads enabled, Retry enabled (10 times max)
  2. Choose the 1 GB file on each page so that all 3 tabs are uploading at the same time.
  3. Wait. We sometimes begin seeing retries near the beginning of the upload, sometimes it takes a few minutes. It always happens eventually.

Upload speed does not appear to have any effect. Our office connection is a meager 5 MB/s upload, and it errors. Our client's upload speed is pushing 800 MB/s, and it errors as well.

@rnicholus
Member

What version of Fine Uploader?


@simshaun
Author

FineUploader 4.3.1

@rnicholus
Member

Thanks. We will look into this further tomorrow. This case will be updated with progress and new info as it becomes available.


@simshaun
Author

We are receiving the error intermittently today just trying to upload a single file.

So, I've been researching the 400 RequestTimeout problem, and it seems we're definitely not alone in having this issue.

aws/aws-cli#454
Solution: They just retry upon receiving 400 RequestTimeout.

aws/aws-sdk-php#29
Solution: They just retry upon receiving 400 RequestTimeout.
Additional clue: If you continue to see this issue, please ensure that you are not sending an incorrect Content-Length header in your requests.

https://groups.google.com/forum/#!msg/jets3t-users/_jr_8VXzSWU/3EWPjwrUoaYJ
Additional clues: S3 is pretty unforgiving about pauses during an upload and will return an error within a few seconds. But the most likely problem is simply an incorrect Content-Length value.

http://www.plupload.com/punbb/viewtopic.php?id=3909
No solutions offered, but it does give a little insight into potential problems with Firefox.
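
As a stopgap, we're considering watching for that specific S3 error in the onError callback and manually requeuing the file. An untested sketch (the substring match on the response body is a naive assumption, and this would need a guard against retrying forever):

var uploader = new qq.s3.FineUploader({
    // ...options as before...
    callbacks: {
        onError: function (id, name, errorReason, xhr) {
            // xhr may be undefined for non-network failures
            if (xhr && xhr.status === 400 &&
                    xhr.responseText.indexOf('RequestTimeout') !== -1) {
                // requeue the file after auto-retries are exhausted
                uploader.retry(id);
            }
        }
    }
});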

@rnicholus
Member

Note that Fine Uploader doesn't set the Content-Length header; the browser does. So that's not likely part of the problem here.


@rnicholus
Member

I just read the plupload thread and didn't find anything useful in there. It's not clear how Firefox is involved. Do you have Firebug open, or any other extensions installed? I've seen extensions and OS antivirus interfere with browser HTTP traffic on numerous occasions.

@simshaun
Author

The Firefox install we are testing on at our client's office is vanilla; the only extension installed on it is Firebug.

@rnicholus
Member

Firebug has been a source of constant pain for me. I've found builds of it to be quite buggy and generally avoid using it. Is firebug open when you reproduce these issues?

@simshaun
Author

That is fairly interesting. We disabled Firebug and are utilizing Firefox's built-in developer tools now.

Perhaps it's coincidental, but whereas we were getting roughly 20-30 errors per upload (uploading 3 files at once) before, we had at most 2 errors on the same files with Firebug disabled. We're still testing with additional batches of uploads to find out if this remains true, and I'll update when we know more.

@simshaun
Author

I think you've hit the nail on the head about Firebug.

We've uploaded 4 batches of 6 GB files (3 uploads at a time in each batch) with Firebug disabled. The first one is described in the comment above. The next 3 batches all completed without error.

Tried a 5th batch with Firebug re-enabled, and the errors began appearing again. We waited until each upload hit 30 errors at only ~20% through, then disabled Firebug again. The uploads quickly finished without another error. This is definitely not a FineUploader issue as far as I'm concerned.


We are receiving errors in IE9 as well. Log from IE9 console:

LOG: [Fine Uploader 4.3.1] Received 1 files or inputs. 
LOG: [Fine Uploader 4.3.1] onSubmit - waiting for onSubmit promise to be fulfilled for 0 
LOG: [Fine Uploader 4.3.1] onSubmit promise success for 0 
LOG: [Fine Uploader 4.3.1] Submitting S3 signature request for 0 
LOG: [Fine Uploader 4.3.1] Sending POST request for 0 
LOG: [Fine Uploader 4.3.1] Sending upload request for 0 
LOG: [Fine Uploader 4.3.1] Received response for 0_17574c5e-5bee-4e8c-a8f3-8070f9281507 
[Fine Uploader 4.3.1] Error when attempting to access iframe during handling of upload response (Access is denied.)
LOG: [Fine Uploader 4.3.1] iframe loaded 
[Fine Uploader 4.3.1] Amazon likely rejected the upload request 

Using Fiddler2 to inspect the request, we see the response from Amazon:

<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>MalformedPOSTRequest</Code><Message>The body of your POST request is not well-formed multipart/form-data.</Message><RequestId>5363D49F5B3E1150</RequestId><HostId>tyUziamtbIKB30oMbY+AVDeaDg4lL5vkr+r5AGeanX0fme/rWbUJhXRiwGI9lAHI</HostId></Error>

IE9 successfully uploaded a 1GB file, then 2x 1GB files at once. It fails consistently with a 4.5GB file. Should I open a new issue for this?

@rnicholus
Member

No, this is not something we can overcome. Every browser has some sort of maximum request size. A 4.5 GB file likely exceeds that max, perhaps resulting in a truncated request, which causes AWS to complain. We can easily get around this max request size in modern browsers via chunking, but chunking is not possible in IE9 and older.


@simshaun
Author

Thanks for sticking with me through this.

You are correct. It appears IE9's upload limit is 4 GB per http://blogs.msdn.com/b/ieinternals/archive/2011/03/10/wininet-internet-explorer-file-download-and-upload-maximum-size-limits.aspx

I think information like that would be useful in the FineUploader docs, even though it's not really your responsibility. Would you consider it?
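
In the meantime, a guard like this could keep IE9 users from hitting the wall. A sketch, assuming qq.supportedFeatures.chunking reflects chunking support, and using the ~4 GB WinINet cap from that article:

var options = { /* ...usual options... */ };

if (!qq.supportedFeatures.chunking) {
    // No chunking available (e.g. IE9), so reject files near the
    // browser's ~4 GB request limit up front with a clear message.
    options.validation = { sizeLimit: 4000000000 };
}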

@rnicholus
Member

Yep, I'd have to think about where to put it. Can you open up a case so I remember to take care of this?

I guess we can close this one as well?

@EZWrighter

Can this be re-opened to actually take care of the original poster's issue? The retry count should reset upon success. I would say that is pretty standard retry behavior for uploads. There could be all kinds of reasons for network failure, but once the failures have stopped and a chunk or file uploads successfully, the old retry count is irrelevant. You could even add it as a config option, but I doubt anybody really wants a cumulative retry count.

@rnicholus
Member

Sure. I'll reopen this so we can discuss more internally.

@rnicholus
Member

This will be part of a future release.

@rnicholus rnicholus removed this from the 5.1 milestone Sep 24, 2014
@notken

notken commented Oct 6, 2015

Was this ever put into the codebase and I've just not found the setting? I'm introducing deliberate random failures to test, with a retry count of 3. When it gives up, if I click Retry it only retries once before giving up again. The retry count does need resetting to zero after either a successful chunk or, at the very least, after a manual retry. I'm using version 5.3.2.

(Sorry, I should point out I'm not using S3. Just a standard upload to server.)
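
For reference, my setup is roughly the following; a plain traditional-endpoint sketch (the endpoint path is a placeholder):

var uploader = new qq.FineUploader({
    element: document.getElementById('uploader'),
    request: {
        endpoint: '/server/upload' // placeholder path
    },
    chunking: {
        enabled: true
    },
    retry: {
        enableAuto: true,
        maxAutoAttempts: 3
    }
});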

@rnicholus
Member

This has not made it into a scheduled release yet. When it does, the case will be updated appropriately.

@rnicholus
Member

By the way, while I do agree that the retry count should be reset after a successful chunk upload, I do not agree that this should be reset on a manual retry.

@notken

notken commented Oct 7, 2015

I guess because it's not resetting after a successful chunk, the fact that it also doesn't reset when you manually retry seems illogical. It just seems like it's totally ignoring the settings.

When testing with built-in random failures, if I click Retry I'd expect it to tolerate 3 more random failures before stopping again. But in practice, once it has successfully uploaded a chunk after a failure, I'd be quite happy for it to keep recovering from occasional failures until it completes, and only fail if it really couldn't get through at all.

It just seems odd to ask for an auto-retry count in the settings, and then ignore it in favor of a developer-decided number when you manually retry. Maybe it all needs to be in the settings.

@rnicholus
Member

This will be fixed simply by resetting the count once a chunk has successfully uploaded.
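
Conceptually, the change amounts to something like this (a hypothetical sketch of the intended behavior, not the actual internals):

// Hypothetical sketch; not Fine Uploader's real internal code.
var autoRetries = {}; // per-file auto-retry counters

function handleChunkFailure(fileId, maxAutoAttempts) {
    autoRetries[fileId] = (autoRetries[fileId] || 0) + 1;
    // only auto-retry while under the configured cap
    return autoRetries[fileId] <= maxAutoAttempts;
}

function handleChunkSuccess(fileId) {
    // progress was made, so wipe the slate clean
    autoRetries[fileId] = 0;
}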

@ShaharTal

I am getting this error as well, and not just in Firefox; mainly Chrome, actually. It happens only with large files uploaded directly to S3. What info can I give to help investigate?

fragilbert added a commit to fragilbert/file-uploader that referenced this issue Aug 10, 2019
[The referenced commit squashes several years of upstream history; the entry relevant to this issue:]

* fix(uploader.basic.api.js): auto-retry count not reset on success (FineUploader#1964)
  fixes FineUploader#1172