base64 decoder could be 2x faster when decoding wrapped base64 #12114
Comments
Can you please post the JS code sample to test this? I could probably take a look at it.
var crypto = require('crypto');

var data = crypto.randomBytes(32 * 1024 * 1024);
var base64 = '';
var now = 0;

var times = 3;
while (times--) {
  now = Date.now();
  base64 = data.toString('base64');
  console.log('Encode: ' + (Date.now() - now) + 'ms');
}

var times = 3;
while (times--) {
  now = Date.now();
  Buffer.from(base64, 'base64');
  console.log('Decode Fast: ' + (Date.now() - now) + 'ms');
}

function wrap(string) {
  var lines = [];
  var index = 0;
  var length = string.length;
  while (index < length) {
    lines.push(string.slice(index, index += 76));
  }
  return lines.join('\r\n');
}

base64 = wrap(base64);

var times = 3;
while (times--) {
  now = Date.now();
  Buffer.from(base64, 'base64');
  console.log('Decode Slow: ' + (Date.now() - now) + 'ms');
}
@jorangreef I tried to do that, but the performance increase is only 10–11% for me: #12146.
The fast base64 decoder used to switch to the slow one permanently when it saw whitespace or another garbage character. Since such characters are most commonly encountered in line-wrapped base64 data, a more profitable strategy is to decode a single 24-bit group with the slow decoder and then continue running the fast algorithm.

Refs: nodejs#12114
@aqrln I just finished https://github.com/ronomon/base64 which is an alternative C++ buffer-to-buffer encoder/decoder (you can decode a buffer containing base64 without allocating an interim string). If you take a look at the C++ binding source, it handles the slow case without switching permanently to slow mode. I also included a simplistic
The fast base64 decoder used to switch to the slow one permanently when it saw whitespace or another garbage character. Since such characters are most commonly encountered in line-wrapped base64 data, a more profitable strategy is to decode a single 24-bit group with the slow decoder and then continue running the fast algorithm.

PR-URL: #12146
Ref: #12114
Reviewed-By: Anna Henningsen <[email protected]>
Reviewed-By: Trevor Norris <[email protected]>
Reviewed-By: James M Snell <[email protected]>
Thanks @aqrln, just wanted to check: were you able to increase the performance by close to 100% in the end?
@jorangreef nope, I haven't worked on this since that PR. If you'd like to optimize it more to achieve the performance of your userland library, that would be really great. If not, I could maybe take a look at it later.
Thanks @aqrln, I hope you can run with it and get it all the way there - your C++ is probably better than mine. You are welcome to copy the decoder verbatim from my source if not.
Here's the latest on this with Node, generated by https://github.com/ronomon/base64/blob/master/fast-slow.js:
The Base64 reference is at https://github.com/ronomon/base64.
Seeing there's been no real movement on this in over a year, I'll go ahead and close this out. Pull requests still welcome, of course.
Node's base64 decoder currently uses a fast decoder and a slow decoder.
The fast decoder decodes 32-bit words at a time. If it sees a line break, whitespace, or garbage, it switches permanently to the slow decoder, which decodes a byte at a time, with a conditional branch per byte instead of per 32-bit word.
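To make that concrete, here is a rough JavaScript sketch (illustrative only; Node's real decoder is C++ and the names here are made up) of why the word-at-a-time path is cheap: invalid characters map to -1 in a reverse lookup table, so one sign check per 4-character group replaces a conditional branch per input byte:

// Illustrative sketch: decode one 4-character base64 group ("32-bit word")
// with a single validity branch.
var ALPHABET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';
var TABLE = new Int8Array(256).fill(-1); // ASCII code -> 6-bit value, -1 = invalid
for (var i = 0; i < 64; i++) TABLE[ALPHABET.charCodeAt(i)] = i;

function decodeGroupFast(src, si, dst, di) {
  var word = (TABLE[src[si]] << 18) | (TABLE[src[si + 1]] << 12) |
             (TABLE[src[si + 2]] << 6) | TABLE[src[si + 3]];
  if (word < 0) return false; // one branch: some character was not base64
  dst[di] = (word >>> 16) & 0xFF;
  dst[di + 1] = (word >>> 8) & 0xFF;
  dst[di + 2] = word & 0xFF;
  return true;
}

var out = Buffer.alloc(3);
decodeGroupFast(Buffer.from('TWFu', 'latin1'), 0, out, 0);
console.log(out.toString()); // "Man"

The slow decoder, by contrast, has to validate (and possibly skip) every input byte before it can emit anything.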
I did some rough benchmarking to compare decoding a 4 MB random buffer encoded as base64 against decoding the same base64 with CRLFs added every 76 characters, as per MIME base64.
As far as I can see, there's no reason to switch permanently to the slow decoder. If the fast decoder detects that a 32-bit word contains an invalid character, it could just decode the next few bytes byte by byte, and then switch back to fast mode as soon as it has consumed 4 valid base64 characters and output 3 bytes. This could all be a sub-branch after the 32-bit word check, so it should not affect the performance of the fast decoder in any way.
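A minimal JavaScript sketch of that control flow, using the same kind of reverse lookup table as above (again illustrative only, not Node's C++ source; '=' padding and tail handling are omitted):

// Sketch of the proposed recover-and-continue strategy.
var ALPHABET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';
var TABLE = new Int8Array(256).fill(-1); // same reverse lookup as above
for (var i = 0; i < 64; i++) TABLE[ALPHABET.charCodeAt(i)] = i;

function decode(src) { // src: Buffer of base64 text
  var dst = Buffer.alloc(Math.ceil(src.length / 4) * 3);
  var si = 0;
  var di = 0;
  while (si + 4 <= src.length) {
    // Fast path: decode a whole 4-character group with one validity check.
    var word = (TABLE[src[si]] << 18) | (TABLE[src[si + 1]] << 12) |
               (TABLE[src[si + 2]] << 6) | TABLE[src[si + 3]];
    if (word >= 0) {
      dst[di++] = (word >>> 16) & 0xFF;
      dst[di++] = (word >>> 8) & 0xFF;
      dst[di++] = word & 0xFF;
      si += 4;
      continue;
    }
    // Slow sub-branch: the group contained CR, LF, or other garbage. Consume
    // bytes one at a time, skipping invalid ones, until 4 valid characters
    // have produced 3 output bytes, then fall back into the fast loop.
    var collected = 0;
    var group = 0;
    while (collected < 4 && si < src.length) {
      var value = TABLE[src[si++]];
      if (value < 0) continue; // skip the line break or garbage byte
      group = (group << 6) | value;
      collected++;
    }
    if (collected === 4) {
      dst[di++] = (group >>> 16) & 0xFF;
      dst[di++] = (group >>> 8) & 0xFF;
      dst[di++] = group & 0xFF;
    }
  }
  return dst.slice(0, di);
}

// Example: wrapped input decodes the same as unwrapped input.
console.log(decode(Buffer.from('TWFuTWFu', 'latin1')).toString());     // "ManMan"
console.log(decode(Buffer.from('TWFu\r\nTWFu', 'latin1')).toString()); // "ManMan"

With this shape, input that never hits the slow sub-branch pays only the existing word check, which is why the fast case should be unaffected.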
For base64-decoding MIME data, this should nearly double the throughput, since the slow case is triggered only once every 76 bytes.
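As a rough back-of-the-envelope check rather than a measurement: each wrapped line is 76 base64 characters plus a two-character CRLF, and 76 is a multiple of 4, so the line break corrupts exactly one 4-character group per line. With recovery, only that one group (about 4 of every 76 payload characters, roughly 5%) takes the slow path; today, everything after the first CRLF does, which is what makes wrapped input roughly half the speed of unwrapped input.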