New sequence sender component #2051
@@ -65,8 +65,7 @@ TrustedSequencerURL = ""

[Sequencer]
WaitPeriodPoolIsEmpty = "1s"
WaitPeriodSendSequence = "5s"
LastBatchVirtualizationTimeMaxWaitPeriod = "5s"
BlocksAmountForTxsToBeDeleted = 100
FrequencyToCheckTxsForDelete = "12h"
MaxTxsPerBatch = 150
@@ -90,7 +89,6 @@ WeightBinaries = 1
WeightSteps = 1
TxLifetimeCheckTimeout = "10m"
MaxTxLifetime = "3h"
MaxTxSizeForL1 = 131072
[Sequencer.Finalizer]
GERDeadlineTimeoutInSec = "5s"
ForcedBatchDeadlineTimeoutInSec = "60s"
@@ -101,15 +99,21 @@ MaxTxSizeForL1 = 131072
ClosingSignalsManagerWaitForCheckingGER = "10s"
ClosingSignalsManagerWaitForCheckingForcedBatches = "10s"
ForcedBatchesFinalityNumberOfBlocks = 64
TimestampResolution = "15s"
SenderAddress = "0xf39fd6e51aad88f6f4ce6ab8827279cfffb92266"
TimestampResolution = "15s"
PrivateKeys = [{Path = "/pk/sequencer.keystore", Password = "testonly"}]
Review comment: Shouldn't this line be removed?

Reply: Yes, but maybe this is outside the scope of this PR. We should open a new issue to handle that.
[Sequencer.DBManager]
PoolRetrievalInterval = "500ms"
L2ReorgRetrievalInterval = "5s"
[Sequencer.Worker]
ResourceCostMultiplier = 1000

[SequenceSender]
WaitPeriodSendSequence = "5s"
LastBatchVirtualizationTimeMaxWaitPeriod = "5s"
MaxTxSizeForL1 = 131072
SenderAddress = "0xf39fd6e51aad88f6f4ce6ab8827279cfffb92266"
PrivateKeys = [{Path = "/pk/sequencer.keystore", Password = "testonly"}]

[PriceGetter]
Type = "default"
DefaultPrice = "2000"
@@ -0,0 +1,25 @@
package sequencesender

import (
	"github.com/0xPolygonHermez/zkevm-node/config/types"
)

// Config represents the configuration of a sequence sender
type Config struct {
	// WaitPeriodSendSequence is the time the sequencer waits until
	// trying to send a sequence to L1
	WaitPeriodSendSequence types.Duration `mapstructure:"WaitPeriodSendSequence"`
	// LastBatchVirtualizationTimeMaxWaitPeriod is the maximum time to wait
	// since the last batch was virtualized before sending a sequence
	LastBatchVirtualizationTimeMaxWaitPeriod types.Duration `mapstructure:"LastBatchVirtualizationTimeMaxWaitPeriod"`
	// MaxTxSizeForL1 is the maximum size a single transaction can have. This field has
	// non-trivial consequences: transactions larger than 128KB are significantly harder and
	// more expensive to propagate; larger transactions also take more resources
	// to validate whether they fit into the pool or not.
	MaxTxSizeForL1 uint64 `mapstructure:"MaxTxSizeForL1"`
	// SenderAddress defines which private key the eth tx manager needs to use
	// to sign the L1 txs
	SenderAddress string `mapstructure:"SenderAddress"`
	// PrivateKeys defines all the key store files that are going
	// to be read in order to provide the private keys to sign the L1 txs
	PrivateKeys []types.KeystoreFileConfig `mapstructure:"PrivateKeys"`
}
This change would force all the components running in the same binary to share db access. Multiple components will run in different goroutines but will consume a single pool instance.

It's ok to initialize multiple pool instances for different components, and the time it takes to instantiate them shouldn't be a problem. I don't see a reason for this optimization, and I'm afraid of the side effects it can cause when it comes to concurrent and parallel access to the pool.

What is the rationale behind this change? Have you faced an issue during development?
I just thought it was better to have one pool per binary. I can undo this if you don't feel comfortable, or if we are not sure whether it may cause problems.
Single pool: maybe better performance, maybe edge cases.
Multiple pools: works as always, and already tested multiple times.
It's up to you; my suggestion is that we shouldn't change a winning team, unless you want more emotion in your life 🤣