unpack: accept fflag as an option #227
Conversation
Only comment is whether we should make UV_FS_O_FILEMAP on by default. I'm fine with adding the fflag option, but I'd prefer to make it fast by default, if we can.
const path = require('path')
const file = process.argv[2] || path.resolve(__dirname, '../npm.tar')
const fs = require('fs')
const { O_CREAT, O_TRUNC, O_WRONLY, UV_FS_O_FILEMAP } = fs.constants
Minor nit: this should provide a default value of 0. undefined will boolean-or to 0 anyway, but it saves the JS interpreter having to change the types around.
@isaacs updated, thanks for reviewing! UV_FS_O_FILEMAP should not be enabled by default in tar, because it is slower for large files (>1MB). It should, however, be enabled by default in npm, where most files are small: npm/pacote#8
The file size is known in advance when extracting a tar entry, so it seems like we should default it for cases where we know it's going to be a benefit, no?
Closing in favor of #230
This allows the flags used by the WriteStream when writing files to be specified as an option.
This allows programs using node-tar to take advantage of writing files using a file mapping, which can be significantly faster on Windows. Support landed in libuv in libuv/libuv#2295 and in Node.js in nodejs/node#29260.
For the included benchmarks, execution time on my local machine drops from 6.6s to 2.8s for async and from 15.8s to 3.4s for sync. The effect depends heavily on the machine: the improvement can range from negligible to even better than the above.