This repository has been archived by the owner on Jul 21, 2023. It is now read-only.
Conversation
Messages are serialized to multiple buffers. Instead of yielding each buffer one by one, create single buffers that contain the whole serialized message. This greatly improves transport performance, as writing one big buffer is a lot faster than writing lots of small buffers to network sockets etc.

This is similar to what the old CJS version did, except it yielded `BufferList`s which were then `.slice()`ed elsewhere to turn them into a single `Buffer`.

Before:

```
testing 0.40.x-mplex
sender 3276811 messages 17 invocations
sender 6553636 bufs 17 b
24197 ms
105 MB in 32 B chunks in 24170ms
```

After:

```
testing 0.40.x-mplex
sender 3276811 messages 1638408 invocations
1638411 bufs 68 b
8626 ms
105 MB in 32 B chunks in 8611ms
```

With this patch and libp2p/js-libp2p#1491

Refs libp2p/js-libp2p#1342
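To make the idea concrete, here is a minimal sketch of combining the already-serialized parts into one buffer before yielding it; `encodeSingleBuffer` and its parameters are illustrative only and are not the actual `Encoder` API in `src/encode.ts`:

```ts
// Sketch: copy the serialized header and payload into one Uint8Array so the
// transport sees a single large write per message instead of several tiny ones.
function encodeSingleBuffer (header: Uint8Array, payload: Uint8Array): Uint8Array {
  const out = new Uint8Array(header.byteLength + payload.byteLength)
  out.set(header, 0)                   // header first
  out.set(payload, header.byteLength)  // payload immediately after
  return out
}

// Before: `yield header; yield payload` -> many small socket writes
// After:  `yield encodeSingleBuffer(header, payload)` -> one write per message
```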
This was referenced Nov 23, 2022
wemeetagain reviewed Nov 24, 2022
src/encode.ts (Outdated)
@@ -17,7 +18,7 @@ class Encoder {
  /**
   * Encodes the given message and returns it and its header
comment needs updating
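For reference, the doc comment in the hunk above still describes the pre-change behaviour, where the encoder produced the message and its header as separate pieces; after this change a single combined buffer is returned. A hypothetical updated wording, not the text that was actually committed:

```ts
/**
 * Encodes the given message and returns a single buffer
 * containing both its header and its payload
 */
```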
achingbrain added a commit to libp2p/js-libp2p that referenced this pull request Nov 24, 2022
Instead of using `it-pipe` to tie the inputs and outputs of the muxer and underlying connection together, pipe them in parallel.

When sending 105MB in 32b chunks:

## `[email protected]`

```
testing 0.36.x
sender 3276810 messages 1638409 invocations <-- how many mplex messages are sent in how many batches
sender 1638412 bufs 68 b <-- how many buffers are passed to the tcp socket and their average size
105 MB in 32 B chunks in 9238ms
```

## `[email protected]`

```
testing 0.40.x-mplex
sender 3276811 messages 32 invocations
sender 6553636 bufs 17 b
27476 ms
105 MB in 32 B chunks in 27450ms
```

## With this patch

```
testing 0.40.x-mplex
sender 3276811 messages 17 invocations
sender 6553636 bufs 17 b
23781 ms
105 MB in 32 B chunks in 23753ms
```

## With this patch and libp2p/js-libp2p-mplex#233

```
testing 0.40.x
sender 3276811 messages 1638408 invocations
1638411 bufs 68 b
105 MB in 32 B chunks in 8611ms
```

Refs #1342
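As a rough sketch of what "pipe them in parallel" means here (the `Duplex` interface and `pipeInParallel` below are made up for illustration and are not the js-libp2p implementation): rather than a single chain along the lines of `pipe(connection, muxer, connection)`, each direction gets its own concurrently running pipe.

```ts
// Minimal duplex shape used by this sketch, mirroring the async-iterable
// duplex convention: a `source` to read from and a `sink` to write into.
interface Duplex {
  source: AsyncIterable<Uint8Array>
  sink: (source: AsyncIterable<Uint8Array>) => Promise<void>
}

// Run both directions concurrently instead of tying them into one chain.
async function pipeInParallel (connection: Duplex, muxer: Duplex): Promise<void> {
  await Promise.all([
    // outbound: muxed bytes flow straight into the underlying connection
    connection.sink(muxer.source),
    // inbound: bytes from the connection flow straight into the muxer
    muxer.sink(connection.source)
  ])
}
```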
mpetrunic approved these changes Nov 24, 2022
github-actions bot pushed a commit that referenced this pull request Nov 24, 2022
## [7.0.5](v7.0.4...v7.0.5) (2022-11-24)

### Bug Fixes

* apply message size limit before decoding message ([#231](#231)) ([279ad47](279ad47))
* limit unprocessed message queue size separately to message size ([#234](#234)) ([2297856](2297856))
* yield single buffers ([#233](#233)) ([31d3938](31d3938))
🎉 This PR is included in version 7.0.5 🎉

The release is available on:

Your semantic-release bot 📦🚀