
txpool: buffer so that we dont delete txs #2500

Closed · wants to merge 4 commits into from

Conversation

@emailtovamos (Contributor) commented May 30, 2024

Description

To avoid letting go of valid, authentic transactions just because the txpool is full, this PR adds a mechanism that keeps in memory the transactions that would otherwise have been deleted. From time to time the txpool is checked; if it is no longer full, the buffered transactions can be re-added.
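The idea above can be sketched as a small overflow buffer with a drain step. This is only an illustration of the mechanism described in the description, not the PR's actual code: `OverflowBuffer`, `Add`, and `Drain` are hypothetical names, and plain strings stand in for `*types.Transaction`.

```go
package main

import (
	"fmt"
	"sync"
)

// OverflowBuffer holds transactions that the full txpool would otherwise
// have dropped, up to a fixed cap. (Illustrative sketch, not the PR's code.)
type OverflowBuffer struct {
	mu  sync.Mutex
	txs []string // stands in for []*types.Transaction
	max int
}

// Add buffers a dropped tx; it returns false when the buffer itself is full,
// in which case the tx is truly discarded.
func (b *OverflowBuffer) Add(tx string) bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	if len(b.txs) >= b.max {
		return false
	}
	b.txs = append(b.txs, tx)
	return true
}

// Drain returns up to n buffered txs for re-injection once a periodic check
// finds that the pool has room again.
func (b *OverflowBuffer) Drain(n int) []string {
	b.mu.Lock()
	defer b.mu.Unlock()
	if n > len(b.txs) {
		n = len(b.txs)
	}
	out := append([]string(nil), b.txs[:n]...)
	b.txs = b.txs[n:]
	return out
}

func main() {
	buf := &OverflowBuffer{max: 2}
	fmt.Println(buf.Add("tx1"), buf.Add("tx2"), buf.Add("tx3")) // true true false
	fmt.Println(buf.Drain(1))                                   // [tx1]
}
```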

Rationale

This is to ensure we keep as many transactions as possible without significantly affecting networking.

Example

add an example CLI or API response...

Changes

Notable changes:

  • add each change in a bullet point here
  • ...

@emailtovamos requested a review from @galaio on June 3, 2024 12:04
@galaio (Contributor) commented Jun 4, 2024

Hi, why not just increase the txpool slot size to buffer more txs? What is the difference between the two approaches?

@emailtovamos (Contributor, Author) commented:

@galaio Increasing the slot size would raise the limit across the entire network, adding extra load for everyone. With this PR, the more performant nodes reduce the number of dropped transactions, while the overall networking load stays lower than in the first case: it is delayed and distributed over time instead.

@brilliant-lx did you have any other justification behind this?

// Add dropped transactions to the buffer
for _, dropTx := range drop {
	log.Trace("Buffering discarded transaction", "hash", dropTx.Hash(), "gasTipCap", dropTx.GasTipCap(), "gasFeeCap", dropTx.GasFeeCap())
	underpricedTxMeter.Mark(1)
I think this metric shouldn't be triggered here: the txs are being buffered, not discarded as underpriced.

	if len(pool.buffer) < maxBufferSize {
		pool.buffer = append(pool.buffer, dropTx)
	} else {
		log.Warn("Buffer is full, discarding transaction", "hash", hash)
Why do you even continue with the loop if the buffer is full?
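The reviewer's point can be addressed by breaking out of the loop as soon as the buffer fills, since every remaining tx would hit the same full-buffer branch. A minimal sketch with illustrative names (plain strings stand in for transactions; this is not the PR's code):

```go
package main

import "fmt"

const maxBufferSize = 3 // illustrative cap

// bufferDropped appends dropped txs to the buffer until the cap is reached,
// then stops iterating instead of logging a warning per remaining tx.
func bufferDropped(buffer, drop []string) []string {
	for _, tx := range drop {
		if len(buffer) >= maxBufferSize {
			// No point continuing: every remaining tx would be discarded too.
			fmt.Println("buffer full, discarding remaining txs")
			break
		}
		buffer = append(buffer, tx)
	}
	return buffer
}

func main() {
	out := bufferDropped([]string{"a", "b"}, []string{"c", "d", "e"})
	fmt.Println(out) // [a b c]
}
```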

Comment on lines 827 to 840

// Add dropped transactions to the buffer
for _, dropTx := range drop {
	log.Trace("Buffering discarded transaction", "hash", dropTx.Hash(), "gasTipCap", dropTx.GasTipCap(), "gasFeeCap", dropTx.GasFeeCap())
	underpricedTxMeter.Mark(1)

	pool.bufferLock.Lock()
	if len(pool.buffer) < maxBufferSize {
		pool.buffer = append(pool.buffer, dropTx)
	} else {
		log.Warn("Buffer is full, discarding transaction", "hash", hash)
	}
	pool.bufferLock.Unlock()
}

Suggested change (replace the per-tx loop with a single bulk append under the lock; the original suggestion contained a `pool.buffer = pool.buffer =` double-assignment typo, fixed below):

Removed:
	// Add dropped transactions to the buffer
	for _, dropTx := range drop {
		log.Trace("Buffering discarded transaction", "hash", dropTx.Hash(), "gasTipCap", dropTx.GasTipCap(), "gasFeeCap", dropTx.GasFeeCap())
		underpricedTxMeter.Mark(1)
		pool.bufferLock.Lock()
		if len(pool.buffer) < maxBufferSize {
			pool.buffer = append(pool.buffer, dropTx)
		} else {
			log.Warn("Buffer is full, discarding transaction", "hash", hash)
		}
		pool.bufferLock.Unlock()
	}

Added:
	// Add dropped transactions to the buffer
	pool.bufferLock.Lock()
	availableSpace := maxBufferSize - len(pool.buffer)
	// Determine how many elements to take from drop
	if availableSpace > len(drop) {
		availableSpace = len(drop)
	}
	pool.buffer = append(pool.buffer, drop[:availableSpace]...)
	pool.bufferLock.Unlock()

}

var readded []*types.Transaction
for _, tx := range pool.buffer {
You do not lock bufferLock here, and you should.
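One common way to address this review comment is to snapshot the buffer under the lock and iterate over the snapshot, so the loop itself never races with concurrent `Add`s. A sketch with illustrative names (strings stand in for `*types.Transaction`; this is not the PR's code):

```go
package main

import (
	"fmt"
	"sync"
)

// pool mirrors the fields the diff touches. (Illustrative sketch only.)
type pool struct {
	bufferLock sync.Mutex
	buffer     []string
}

// readdBuffered copies the buffer under the lock, clears it, then iterates
// over the private snapshot outside the critical section.
func (p *pool) readdBuffered() []string {
	p.bufferLock.Lock()
	snapshot := append([]string(nil), p.buffer...)
	p.buffer = p.buffer[:0]
	p.bufferLock.Unlock()

	var readded []string
	for _, tx := range snapshot { // safe: snapshot is not shared
		readded = append(readded, tx)
	}
	return readded
}

func main() {
	p := &pool{buffer: []string{"tx1", "tx2"}}
	fmt.Println(p.readdBuffered()) // [tx1 tx2]
}
```

Holding the lock across the whole loop would also be correct, but re-adding txs to the pool while holding bufferLock risks lock-ordering issues with the pool's own mutex, so the snapshot pattern is usually safer.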

@brilliant-lx (Collaborator) commented:
> @galaio that would mean in the entire network the number is increased which will come with extra load for every one. In the current PR case the more performant nodes will ensure the number of dropped transactions is less while still ensuring the total overall networking load isn't as much as the first case but rather delayed and distributed across time.
>
> @brilliant-lx did you have any other justification behind this?

I was wondering if we can just increase the current TxPool slot size along with some tx broadcast mechanism change, like:

  • rate limit control: stop broadcasting txs to all peers once broadcastTPS > thresholdTPS (3000?)
  • once the rate limit is reached: only broadcast txs to a few nodes (3 by default?), or introduce a new concept like "SuperPeer". BSC's TxPool topology could differ from Ethereum's: BSC could have "NormalPeer" and "SuperPeer", where a SuperPeer has a giant TxPool size.

As for this PR, I think it is simple and can help prevent txs from disappearing from the network, but I wonder if we can find other, better solutions.
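The rate-limit idea sketched in the bullets above could look something like the following. The threshold and peer counts are the comment's own examples (3000 TPS, 3 fallback peers); everything else, including the function name, is a hypothetical illustration rather than BSC's actual broadcast code:

```go
package main

import "fmt"

const (
	thresholdTPS  = 3000 // example threshold from the comment
	fallbackPeers = 3    // example fallback fan-out from the comment
)

// peersToBroadcast decides how many peers receive a tx broadcast: all peers
// under normal load, only a few once the TPS threshold is exceeded.
func peersToBroadcast(currentTPS, totalPeers int) int {
	if currentTPS > thresholdTPS {
		if totalPeers < fallbackPeers {
			return totalPeers
		}
		return fallbackPeers // rate limit reached: restrict the fan-out
	}
	return totalPeers // normal case: broadcast to all peers
}

func main() {
	fmt.Println(peersToBroadcast(1000, 50)) // 50
	fmt.Println(peersToBroadcast(5000, 50)) // 3
}
```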

@zzzckck closed this Nov 22, 2024