Use llhttp wasm build #575
Conversation
You only need to implement RESPONSE for the client.
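For illustration, parser setup in RESPONSE-only mode could look roughly like this. A sketch, not the PR's code: `llhttp` stands for the instantiated wasm exports and `llhttp_alloc` is an assumed helper name; the enum values are llhttp's actual `llhttp_type_t` values.

```js
// Sketch: `llhttp` = instantiated wasm exports; `llhttp_alloc` is an assumed name.
const HTTP_RESPONSE = 2 // llhttp_type_t: HTTP_BOTH = 0, HTTP_REQUEST = 1, HTTP_RESPONSE = 2

// undici is a client, so it only ever parses responses.
const parser = llhttp.llhttp_alloc(HTTP_RESPONSE)
```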
The goal for this activity should be to verify that we suffer minimal slowdowns from using wasm instead of native.
We also probably want to reduce the number of calls across the JS <-> wasm border, so we might want to modify the C-level code as well. Let's just get it working and passing tests first, and then we optimize.
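To make the border-crossing concern concrete, here is a hedged sketch of the "one call per chunk" shape; `malloc`, `free` and the `llhttp_execute` export are assumptions about the wasm glue, not the actual PR code.

```js
// Sketch only: export names (`malloc`, `free`, `llhttp_execute`) are assumed.
function execute (instance, parserPtr, chunk) {
  const { memory, malloc, free, llhttp_execute } = instance.exports

  // One copy of the socket chunk into wasm linear memory...
  const ptr = malloc(chunk.length)
  new Uint8Array(memory.buffer, ptr, chunk.length).set(chunk)

  // ...and a single call across the JS <-> wasm border for the whole chunk,
  // instead of many fine-grained calls.
  const ret = llhttp_execute(parserPtr, ptr, chunk.length)
  free(ptr)
  return ret
}
```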
I agree!
Codecov Report
```diff
@@            Coverage Diff             @@
##           master     #575      +/-   ##
==========================================
- Coverage   99.36%   97.99%   -1.37%
==========================================
  Files          16       17       +1
  Lines        1414     1547     +133
==========================================
+ Hits         1405     1516     +111
- Misses          9       31      +22
```
Continue to review full report at Codecov.
If you have time could you try …

The …
Rebased against #571. I'll fix tests iteratively next.
I was able to run a benchmark with the current changes, even if not complete. Node version: …

```
> undici@… bench /home/dnlup/Dev/dnlup/undici
> npx concurrently -k -s first "node benchmarks/server.js" "node -e 'setTimeout(() => {}, 1000)' && node benchmarks"

[1] http - no agent x 402 ops/sec ±3.49% (61 runs sampled)
[1] http - keepalive x 453 ops/sec ±2.94% (59 runs sampled)
[1] http - keepalive - multiple sockets x 3,191 ops/sec ±4.19% (72 runs sampled)
[1] undici - pipeline x 3,702 ops/sec ±3.84% (69 runs sampled)
[1] undici - request x 3,572 ops/sec ±3.56% (66 runs sampled)
[1] undici - pool - request - multiple sockets x 3,527 ops/sec ±3.79% (65 runs sampled)
[1] undici - stream x 3,885 ops/sec ±3.37% (71 runs sampled)
[1] undici - dispatch x 3,842 ops/sec ±4.04% (65 runs sampled)
[1] undici - noop x 4,524 ops/sec ±1.35% (67 runs sampled)
```

```
> undici@… bench /home/dnlup/Dev/dnlup/undici
> npx concurrently -k -s first "node benchmarks/server.js" "node -e 'setTimeout(() => {}, 1000)' && node benchmarks"

[1] http - no agent x 420 ops/sec ±4.12% (50 runs sampled)
[1] http - keepalive x 441 ops/sec ±2.88% (52 runs sampled)
[1] http - keepalive - multiple sockets x 3,400 ops/sec ±2.81% (78 runs sampled)
[1] undici - pipeline x 3,813 ops/sec ±2.96% (72 runs sampled)
[1] undici - request x 3,578 ops/sec ±2.83% (67 runs sampled)
[1] undici - pool - request - multiple sockets x 3,830 ops/sec ±3.83% (69 runs sampled)
[1] undici - stream x 3,773 ops/sec ±3.81% (65 runs sampled)
[1] undici - dispatch x 3,976 ops/sec ±3.07% (71 runs sampled)
[1] undici - noop x 4,302 ops/sec ±2.46% (58 runs sampled)
```

Not sure why my results are lower than average on both branches, but I thought it would have been worse.
This is incredibly positive. Good work @dnlup!
The only test remaining is https://github.com/nodejs/undici/blob/master/test/client-pipeline.js#L975. The buffers differ by 2 bytes; it looks like the first 2 from the …
Could it maybe be a string encoding thing?
That makes sense; how could I test it?
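One quick way to check, as a sketch: compare how the same characters encode under the candidate encodings, since a multi-byte sequence reinterpreted in another encoding shifts buffer lengths by exactly a few bytes.

```js
// A small byte-count difference is a classic utf8-vs-latin1 symptom:
const utf8 = Buffer.from('é', 'utf8')     // <Buffer c3 a9> -> 2 bytes
const latin1 = Buffer.from('é', 'latin1') // <Buffer e9>    -> 1 byte
console.log(utf8.length, latin1.length)   // 2 1

// Dumping both buffers as hex usually shows whether a multi-byte
// sequence is being reinterpreted (`expected`/`actual` are placeholders
// for the buffers from the failing test):
// console.log(expected.toString('hex'))
// console.log(actual.toString('hex'))
```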
I'll dig into it. There's a …
I don't think we need to fix this, since we always destroy the socket and parser on error anyway. I just think we should have an assertion or error so as not to make mistakes in the future.
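A minimal sketch of such a guard, with the `destroyed` flag and call site assumed for illustration:

```js
const assert = require('assert')

// Fail loudly if anything executes a parser that was already destroyed
// after an error, instead of silently misbehaving.
function execute (parser, chunk) {
  assert(!parser.destroyed, 'parser.execute() called after destroy')
  // ... hand the chunk to the wasm parser ...
}
```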
```js
// Build-script excerpt: clone the fork carrying the wasm build into a temp dir.
const { execSync } = require('child_process')
const TMP = require('os').tmpdir()

// Assumed SSH remote for the custom fork with the wasm build.
const REPO = 'git@github.com:dnlup/llhttp.git'
```
I would prefer we used the upstream repository instead and copied over whatever files we need.

Alternatively, I'd be ok with a PR that adds this to llhttp itself.
> I would prefer we used the upstream repository instead and copied over whatever files we need.

Expanding on https://github.com/nodejs/undici/pull/575/files#r589005221 (and thinking out loud): we need to build the C files prior to building the wasm binary, and the source files are the same ones llhttp uses to build the parser. I think moving that out of llhttp would require us to duplicate a lot of build logic in here: we would need the C src files, the js files that generate the parser, and the build scripts.

> Alternatively, I'd be ok with a PR that adds this to llhttp itself.

I'll open a PR to llhttp and see what happens.
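For reference, the build flow under discussion looks roughly like the sketch below. The clone/build steps mirror llhttp's documented workflow (the parser's C source is generated from TypeScript before compiling), but the clang flags and file list are assumptions, not the fork's actual script.

```js
const { execSync } = require('child_process')
const { join } = require('path')
const TMP = require('os').tmpdir()

const dir = join(TMP, 'llhttp')

// 1. Fetch llhttp and generate the C sources (`npm run build` produces
//    build/c/llhttp.c from the TypeScript definition).
execSync(`git clone https://github.com/nodejs/llhttp.git ${dir}`)
execSync('npm install && npm run build', { cwd: dir })

// 2. Compile the generated C to wasm (flags are illustrative; assumes a
//    clang with wasm32 support).
execSync(
  'clang --target=wasm32 -nostdlib -O2 ' +
  '-Wl,--no-entry -Wl,--export-all ' +
  'build/c/llhttp.c src/native/api.c src/native/http.c ' +
  '-Ibuild -o llhttp.wasm',
  { cwd: dir }
)
```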
Ok, here's the PR: …
This implements a minimal parser that uses llhttp built with wasm. A custom repository was set up to have a custom wasm build.

Benchmarks of the current changes:

```
http - no agent x 436 ops/sec ±4.06% (54 runs sampled)
http - keepalive x 453 ops/sec ±2.72% (54 runs sampled)
http - keepalive - multiple sockets x 3,191 ops/sec ±2.30% (73 runs sampled)
undici - pipeline x 4,062 ops/sec ±3.14% (73 runs sampled)
undici - request x 3,925 ops/sec ±2.81% (74 runs sampled)
undici - pool - request - multiple sockets x 4,045 ops/sec ±3.36% (76 runs sampled)
undici - stream x 3,912 ops/sec ±2.69% (74 runs sampled)
undici - dispatch x 4,371 ops/sec ±1.96% (79 runs sampled)
undici - noop x 4,237 ops/sec ±2.58% (59 runs sampled)
```

Benchmarks of the stable branch:

```
http - no agent x 410 ops/sec ±4.18% (63 runs sampled)
http - keepalive x 446 ops/sec ±2.75% (53 runs sampled)
http - keepalive - multiple sockets x 3,248 ops/sec ±2.97% (73 runs sampled)
undici - pipeline x 3,666 ops/sec ±3.18% (73 runs sampled)
undici - request x 3,951 ops/sec ±3.04% (72 runs sampled)
undici - pool - request - multiple sockets x 4,017 ops/sec ±3.16% (75 runs sampled)
undici - stream x 4,014 ops/sec ±3.89% (70 runs sampled)
undici - dispatch x 3,823 ops/sec ±3.46% (67 runs sampled)
undici - noop x 4,569 ops/sec ±1.99% (59 runs sampled)
```
For what it's worth, I think it's already handled inside https://github.com/nodejs/llhttp/blob/master/src/native/api.h#L81. That should already cause a request to fail, because execute would return an error.
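In JS-glue terms, that check would surface roughly like this (export names and wiring are assumed for the sketch):

```js
// llhttp returns HPE_OK (0) on success; any other llhttp_errno value
// means the bytes were malformed and the request should fail.
const ret = exports.llhttp_execute(parserPtr, chunkPtr, chunk.length)
if (ret !== 0) {
  socket.destroy(new Error(`parse error: ${ret}`))
}
```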
Was reading nodejs/node#37676 (comment) and got curious whether our wasm build is strict or loose?
I suspect it is strict. I don't see much difference between the Node build and the wasm build. How should Node disable …
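For context, llhttp's C API does expose runtime leniency toggles; whether and how this wasm build exports them is an assumption in the sketch below.

```js
// llhttp_set_lenient_headers() exists in llhttp's C API; its presence
// among the wasm exports here is assumed.
exports.llhttp_set_lenient_headers(parserPtr, 1) // non-zero enables lenient header parsing
```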
We would need to commit the …
I wouldn't mind that.
I'm ok with vendoring the full llhttp source and applying patches on top / putting the automation in our own tree.
@mcollina can we merge this once conflicts are resolved?
lgtm
This can land if CI passes.
I think CI is fine, it's just stuck for some reason. I am waiting on #601 to remove the …
Apologies, the comment regards #611. I think we should merge that PR.
I'm confused. Merge into this PR? Or what?
Merge into …
So close this PR?
Yes, you're right. I guess I was waiting to see if this was still doable.
In this PR I'll try to: …

This is still messy, I'll get there eventually.

Final tasks:

- implement headersTimeout option

Fixes #22