Operate on blob parts (byte sequence) #44
Codecov Report
```
@@            Coverage Diff            @@
##            master       #44   +/-  ##
=========================================
  Coverage   100.00%   100.00%
=========================================
  Files            1         1
  Lines           62        70    +8
  Branches        12        20    +8
=========================================
+ Hits            62        70    +8
```
Continue to review full report at Codecov.
Also included some of the JSDoc from #43, and a version change.
I guess this is more like how the browser handles blob parts.
Wouldn't be adding
I think a little test case checking the example from #40 should be added:

```js
var b1 = new Blob([new Uint8Array(1000)])
var b2 = b1.slice(500) // uses the same parts with an offset=500
// b2 should not take up 500 bytes more RAM...
```

Maybe using
I did a manual test on that: I used a 20 MB Uint8Array, sliced it from start to end, and made sure it still used 20 MB.
So, what are we fixing here if memory usage is the same? The way data is read?
Yup, data is read in another way now.
I'd rather follow the spec. Also, now when you slice a blob you reuse the same parts, filtering out the parts that are not needed, and it will also set an offset (on the first and last part) even though it may not look like I'm doing it (code-wise).
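The part-filtering idea described above can be sketched roughly like this. The names `sliceParts`, `offset`, and `end` are illustrative, not fetch-blob's actual internals:

```js
// Hedged sketch: slice a list of Uint8Array parts without copying bytes.
// Parts fully outside [start, end) are dropped; surviving parts keep a
// reference to the same underlying buffer plus an offset/end.
function sliceParts(parts, start, end) {
  const sliced = [];
  let position = 0;
  for (const part of parts) {
    if (position >= end) break;
    const size = part.byteLength;
    if (position + size <= start) {
      position += size; // part ends before the slice starts: skip it
      continue;
    }
    sliced.push({
      data: part, // same underlying Uint8Array, no copy
      offset: Math.max(start - position, 0),
      end: Math.min(end - position, size),
    });
    position += size;
  }
  return sliced;
}

const parts = [new Uint8Array(400), new Uint8Array(600)];
const out = sliceParts(parts, 500, 1000);
// Only the second part survives, with offset 100 and end 600 (500 bytes).
```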
I think the JSDoc should be completed so types can be built out of it later.
As I understand from node-fetch/node-fetch#835, merging this will make
Yeah, I bet https://github.com/octet-stream/form-data would want to use this also, or if a developer uses it together with form-data. One wishful thought would be to add:

```js
class File extends Blob {
  constructor(blobParts, fileName, options = {}) {
    const { lastModified = Date.now(), ...blobPropertyBag } = options;
    super(blobParts, blobPropertyBag);
    this.name = String(fileName).replace(/\u002F/g, "\u003A");
    this.lastModified = lastModified;
  }
}
```

The rest is handled by Blob. I actually found a README from Chrome explaining how blobs are handled: https://chromium.googlesource.com/chromium/src/+/master/storage/browser/blob/README.md

But this is a bit out of this scope and could be for another issue.
This PR makes fetch-blob extendable to work with any third-party blobs by still using them as the source.
I was successful at taking this http-blob WebIDL-like reader and adding it into fetch-blob using the following code.
No binary data has been read, and the size still reflects correctly (even after using slice).
The HTTP request isn't made until I actually call blob.arrayBuffer(), text() or stream().
This will also make it possible to later add File entries that are backed by the file system - you won't have to load anything into memory, and it will still be possible to slice and read chunks.
Doing something like this now won't end up with 4 GiB of RAM being used.
You will still be able to read both files as one single blob.
closes #40