Hi,
the motivation behind this PR has several elements:
With this in mind, since it was discussed to try and address the ENOBUFS issue in the netlink lib (https://github.com/google/nftables/pull/191/files/0d4369aacbd8b10bc86765a69851d0d01a821fd8#r982856106), I am introducing a PR for discussion which covers:

- a `ReceiveBuffer` (`ExecuteBuffer`) func which receives a user-passed `BufferAllocationFunc` that allocates buffers for the underlying socket (covering issue #178, "netlink: add a version of Conn.Receive that doesn't allocate its own buffers"); see the sketch after this list
- a `BufferAllocationFunc` (or default allocation strategy) that is similar to the previous peek-loop-allocate behaviour, but automatically resizes the socket read and write buffers when ENOBUFS happens
- `ReadBuffer` and `WriteBuffer` methods, exposed via a `bufferGetter` interface, for easier calculation of buffer sizes by user applications
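To make the discussion concrete, here is a minimal sketch of the proposed pieces. The names come from this PR, but every signature below is an assumption for illustration, not the actual code in the diff:

```go
// Illustrative sketch only: signatures are assumptions based on the names
// proposed in this PR, not the code under review.
package netlinkbuf

// BufferAllocationFunc lets the application supply the buffer used for the
// underlying socket reads, instead of the library allocating a new one on
// every Receive (the ask in issue #178). Assumed signature.
type BufferAllocationFunc func(size int) []byte

// bufferGetter exposes the socket's current read/write buffer sizes so
// applications can size their own buffers. Assumed signatures.
type bufferGetter interface {
	ReadBuffer() (int, error)
	WriteBuffer() (int, error)
}

// reuseAllocator is one possible user-side allocator: it keeps a single
// backing slice and grows it only when a larger read is requested, so
// steady-state receives allocate nothing.
func reuseAllocator() BufferAllocationFunc {
	var buf []byte
	return func(size int) []byte {
		if size > len(buf) {
			buf = make([]byte, size)
		}
		return buf[:size]
	}
}
```

A `ReceiveBuffer`/`ExecuteBuffer` variant would then call such an allocator for each read instead of allocating internally, and the default strategy would wrap the same idea behind the peek-loop-allocate path with the ENOBUFS resizing described above.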
In the end, I am not sure if this is the best approach: applications could still catch the ENOBUFS error themselves, resize the read and write buffers with `SetReadBuffer` or `SetWriteBuffer`, and then resend the message, which might make this PR unnecessary (a sketch of that alternative follows below). We can change the PR as per your feedback.

Let me know what you think.
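For comparison, a minimal sketch of that application-side alternative, using only the existing API (`Execute`, `SetReadBuffer`, `SetWriteBuffer`); the import path, starting size and retry policy are assumptions for illustration:

```go
// Application-side ENOBUFS handling with the library's existing API.
package main

import (
	"errors"
	"log"

	"github.com/mdlayher/netlink"
	"golang.org/x/sys/unix"
)

// executeWithResize resends a request after growing the socket buffers
// whenever the kernel reports ENOBUFS (i.e. it dropped messages).
// errors.Is relies on the library wrapping the underlying syscall error.
func executeWithResize(c *netlink.Conn, m netlink.Message) ([]netlink.Message, error) {
	size := 64 * 1024 // assumed starting point
	for attempt := 0; attempt < 5; attempt++ {
		msgs, err := c.Execute(m)
		if !errors.Is(err, unix.ENOBUFS) {
			return msgs, err // success, or an unrelated error
		}
		// Double both socket buffers and try again.
		size *= 2
		if err := c.SetReadBuffer(size); err != nil {
			return nil, err
		}
		if err := c.SetWriteBuffer(size); err != nil {
			return nil, err
		}
	}
	return nil, unix.ENOBUFS
}

func main() {
	c, err := netlink.Dial(unix.NETLINK_GENERIC, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	// Build a real request for your protocol here; this zero-value message
	// is only a placeholder to show the call shape.
	var req netlink.Message
	if _, err := executeWithResize(c, req); err != nil {
		log.Printf("request failed: %v", err)
	}
}
```

If this pattern is considered acceptable for applications to write themselves, that would be an argument for keeping the library surface as it is.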