feat: Removing `is_dev_net` flag (#8275)
This PR gets rid of the `is_dev_net` flag that we had in `constants.nr`. This is to simplify the flows such that there is just one flow used by both the devnet and spartan.

- Alters our `createNode` and `setup` such that they will always assume active validators, since that will be required for proper sequencing.
- Changes the `l1-publisher` slightly to reduce the probability that `validateBlockForSubmission` would fail due to an Ethereum block arriving between the check and the `propose` call.
- Alters the `sequencer::work()` function such that there is a cleaner split between when we can return early and when we have to throw an error and revert, so the global state is properly rolled back.
- Alters the `collectAttestations` functions slightly, i) to cover the case where a validator client is not provided but a committee is needed, and ii) to ensure that the sequencer's own attestation also makes its way into the attestations collected.

---

# Graveyard

Below this point is the graveyard where old issues were talked about and insanity ensued.

---

Currently running into issues where tests are behaving "strange". Namely, it seems like we sometimes will have a passing test and sometimes won't. This is especially encountered when many tests are run at once, such as the `e2e_token_contract` tests. A snippet below shares some frustration. If we are running all the tests, they always fail, but if running only some, it seems to depend on what is being logged...

```bash
DEBUG="aztec:*,-aztec:avm_simulator:*" LOG_LEVEL="silent" yarn test e2e_token_contract/transfer_private // passes
LOG_LEVEL="silent" yarn test e2e_token_contract/transfer_private // fails
LOG_LEVEL="DEBUG" yarn test e2e_token_contract/transfer_private // fails
```

Somewhat interesting: if using `AZTEC_SLOT_DURATION = 36`, many of these issues seem to go away, e.g., transactions are not dropped anymore, etc. However, this really should not be the case, since the time influence here is not wall-clock time, as it is using an anvil instance behind the scenes. The more likely reason is that with the longer slot we don't encounter the case where a `submitProof` has progressed time and moved the slot.

---

Looking at logs! What do I see in the logs:

- Transaction TX_A is dropped.
- When looking higher, I can see that TX_A is dropped because of duplicate nullifiers in the state trees! Something interesting! While the nullifier tree from sync is size 1088, the one we match against is 1216, and the index of the first collision is OUTSIDE of the tree that you get from syncing :skull:
- Looking JUST above where we drop these transactions, I see that we are encountering an error while performing `validateHeader` (the slot has changed because of `submitProof`).
- Looking slightly above this, we can see what I believe is the sequencer simulating the base rollups (all good here!).
- Moving slightly up, we can see that the sequencer is processing the transaction itself.
- Further up, we see the user creating the transaction.
- And above that we see the last block, let's call it BLOCK_A.

Note from this: there is no block after BLOCK_A where TX_A could have been included, but when the sequencer is FAILING to publish the new block, it seems to be keeping the state but dropping the block AND its transactions. So the setup fails because the user will get the response from the node that "this is a double-spend, go away".
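To make that rejection concrete, here is a minimal TypeScript sketch of the kind of membership check that turns a stale nullifier tree into a "double-spend" rejection. The names (`NullifierTree`, `findIndex`, `Tx`) are hypothetical stand-ins, not the actual Aztec node API; the point is only to illustrate why a tx whose nullifiers were already inserted by a never-published block looks like a double spend.

```typescript
// Hypothetical interfaces: whatever the node actually uses to query the
// nullifier tree, the shape of the check is roughly this.
interface NullifierTree {
  /** Returns the leaf index if the nullifier is already in the tree, undefined otherwise. */
  findIndex(nullifier: bigint): Promise<number | undefined>;
}

interface Tx {
  hash: string;
  nullifiers: bigint[];
}

async function rejectIfDoubleSpend(tx: Tx, tree: NullifierTree): Promise<void> {
  for (const nullifier of tx.nullifiers) {
    const index = await tree.findIndex(nullifier);
    if (index !== undefined) {
      // If the tree still contains state from a block that was never published,
      // a perfectly valid tx gets rejected right here.
      throw new Error(`Tx ${tx.hash}: nullifier already in tree at index ${index} (double-spend)`);
    }
  }
}
```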
I tried using `PROVER_NODE_DISABLE_AUTOMATIC_PROVING` to turn off the proving, but that doesn't seem to have any effect, and if I try to just bypass the `submitProof`s, it seems to cause the application to infinite-loop, where it just never tries anything ever again.

The tree that the sequencer is checking against, the one that is larger than what you get from sync, seems to only be larger for a "short" time before the sequencer figures out something is messed up and will "rollback", but by then the damage is done and we have potentially dropped a lot of transactions. The exact timing of the failure also "depends", so that is kind of a pain.

![image](https://github.com/user-attachments/assets/2fad7185-fb32-4ffd-a825-e9c55263c8e3)

**Update**: The issue seemed to be that if we returned early from `work()` in the sequencer when not proposing a block, we would not roll back the state, so if state changes had been made, they would WRECK the next block.
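As a rough illustration of the `sequencer::work()` change described at the top (and of the fix for the issue in the update above), the intended shape is roughly the following. This is a hand-written sketch, not the actual aztec-packages sequencer; `WorldState`, `Publisher`, `buildBlock`, and the early-exit checks are assumed names used for illustration only.

```typescript
// Sketch of the intended `work()` control flow: return early ONLY while no
// state has been touched; once the world-state fork has been modified, any
// failure must throw so that the catch block rolls the state back.
// All identifiers below are illustrative, not the real sequencer API.

interface WorldState {
  isSynced(): Promise<boolean>;
  rollback(): Promise<void>; // undo any changes made while building the block
}

interface Publisher {
  proposeL2Block(block: unknown, attestations: unknown[]): Promise<boolean>;
}

interface SequencerDeps {
  worldState: WorldState;
  publisher: Publisher;
  isOurTurnToPropose(): Promise<boolean>;
  buildBlock(): Promise<unknown>; // mutates the world-state fork
  collectAttestations(block: unknown): Promise<unknown[]>;
}

async function work(deps: SequencerDeps): Promise<void> {
  // Phase 1: cheap checks. Nothing has been mutated yet, so returning early is safe.
  if (!(await deps.worldState.isSynced())) return;
  if (!(await deps.isOurTurnToPropose())) return;

  // Phase 2: from here on the state is being mutated; failures must throw, not return.
  try {
    const block = await deps.buildBlock();
    const attestations = await deps.collectAttestations(block);

    const published = await deps.publisher.proposeL2Block(block, attestations);
    if (!published) {
      // A silent early return here is exactly the bug from the update above:
      // the modified state would be kept and wreck the next block.
      throw new Error('Failed to publish block');
    }
  } catch (err) {
    await deps.worldState.rollback();
    throw err;
  }
}
```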