
Codebase


The mempool is largely decoupled from consensus. The mempool receives clients' transactions, batches them into payloads (of configurable size), and broadcasts them to the other nodes; we call these messages 'mempool batches'. Their purpose is to disseminate clients' transactions to the other nodes before consensus attempts to sequence them (thus allowing a more egalitarian bandwidth utilization amongst nodes). They also reduce client-perceived latency, as a node does not need to wait to become leader to propose its clients' transactions: other nodes may include those transactions in their own proposals after receiving them as part of mempool batches. A minimal sketch of this batching step follows below.
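The sketch below illustrates the idea in Rust with hypothetical types (it is not the repository's actual API): a mempool accumulates clients' transactions and emits a payload for broadcast once a configurable size threshold is reached.

```rust
// Hypothetical sketch: batch clients' transactions into payloads of a
// configurable size before broadcasting them as 'mempool batches'.

type Transaction = Vec<u8>;

struct Batcher {
    max_batch_size: usize, // configurable payload size in bytes (assumed parameter)
    current_size: usize,
    buffer: Vec<Transaction>,
}

impl Batcher {
    fn new(max_batch_size: usize) -> Self {
        Self { max_batch_size, current_size: 0, buffer: Vec::new() }
    }

    /// Add a client transaction; return a full payload once the size
    /// threshold is reached so the caller can broadcast it to other nodes.
    fn add(&mut self, tx: Transaction) -> Option<Vec<Transaction>> {
        self.current_size += tx.len();
        self.buffer.push(tx);
        if self.current_size >= self.max_batch_size {
            self.current_size = 0;
            Some(std::mem::take(&mut self.buffer))
        } else {
            None
        }
    }
}

fn main() {
    let mut batcher = Batcher::new(1024);
    for i in 0u32..1_000 {
        if let Some(payload) = batcher.add(i.to_le_bytes().to_vec()) {
            // In the real system this payload would be broadcast to the
            // other nodes as a mempool batch.
            println!("broadcast payload with {} transactions", payload.len());
        }
    }
}
```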

Consensus is then reached on the hashes of these mempool batches. Whenever consensus needs to create a block proposal, it queries its local mempool for the hashes of one or more payloads to include in its block. The query specifies the maximum number of bytes it wishes to receive (also configurable) so as to enforce an upper bound on the size of its proposals. The mempool replies with as many payload hashes as possible while respecting the size constraint of the query, but may also reply with an empty vector if no payloads are available; this can happen if the mempool's batch size is configured too high. Conversely, if the consensus payload size is set too low, the mempool buffers fill up with hashes of payloads waiting to be sequenced and eventually start dropping clients' transactions (to avoid running out of memory). The size of this buffer is configured through the mempool parameters.
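As a rough illustration of this query interface, the following Rust sketch (again with hypothetical types, not the repository's API) returns as many payload digests as fit within a byte budget, and may return an empty vector when no payloads are available.

```rust
// Hypothetical sketch: consensus asks the mempool for payload digests,
// bounded by a maximum number of bytes per block proposal.

use std::collections::VecDeque;

type Digest = [u8; 32];

struct Mempool {
    // Digests of payloads waiting to be sequenced; in the real system the
    // size of this buffer is bounded by the mempool parameters.
    queue: VecDeque<Digest>,
}

impl Mempool {
    /// Return as many payload digests as fit within `max_bytes`;
    /// returns an empty vector if no payloads are available.
    fn get_payload(&mut self, max_bytes: usize) -> Vec<Digest> {
        let max_digests = max_bytes / std::mem::size_of::<Digest>();
        let take = max_digests.min(self.queue.len());
        self.queue.drain(..take).collect()
    }
}

fn main() {
    let mut mempool = Mempool {
        queue: (0..10u8).map(|i| [i; 32]).collect(),
    };
    // Consensus asks for at most 128 bytes of digests (4 x 32-byte digests).
    let payload = mempool.get_payload(128);
    println!("including {} payload digests in the block", payload.len());
}
```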
