
Update stage_finish.go : notifications to rpc daemon #2755

Merged
merged 8 commits into devel from al-20210930-patch-2 on Oct 4, 2021

Conversation

AndreaLanfranchi
Contributor

Only notify for the canonical header obtained from the last Finish.

IMHO this should be enough. But please review!

Only notify for canonical header obtained from last Finish
@AskAlexSharov
Collaborator

It's incompatible with the docs: https://geth.ethereum.org/docs/rpc/pubsub

@AndreaLanfranchi AndreaLanfranchi marked this pull request as draft October 1, 2021 09:30
@AndreaLanfranchi
Contributor Author

AndreaLanfranchi commented Oct 1, 2021

It's incompatible with docs: https://geth.ethereum.org/docs/rpc/pubsub

I'm not arguing, just trying to understand. From the specs:

notifications are sent for current events and not for past events. If your use case requires you not to miss any notifications then subscriptions are probably not the best option.

Due to Erigon's stage structure it is inevitable that, particularly on stage cycle 0, all notifications of headers before the very last one (the one recorded in Finish) are "past" events.

notifications are stored in an internal buffer and sent from this buffer to the client. If the client is unable to keep up and the number of buffered notifications reaches a limit (currently 10k) the connection is closed. Keep in mind that subscribing to some events can cause a flood of notifications, e.g. listening for all logs/blocks when the node starts to synchronize

During the first 2 or 3 cycles of a sync from scratch we might well send out more than 10k notifications.

Maybe we can send out notifications for all processed headers only if notifyTo - notifyFrom < 1024 (i.e. the threshold for single-transaction cycles)?

Waiting for your comments
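
A minimal Go sketch of the proposed gating (the function name, parameters, and callback below are illustrative, not the PR's actual identifiers):

	// notifyRange notifies either every header in [from, to] when the
	// range is small, or only the tip when it is large, to avoid
	// flooding subscribers during initial sync.
	func notifyRange(from, to uint64, notify func(blockNum uint64)) {
		const threshold = 1024 // same bound as single-transaction cycles
		if to-from < threshold {
			for n := from; n <= to; n++ {
				notify(n)
			}
		} else {
			notify(to)
		}
	}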

@AskAlexSharov
Collaborator

“send out notifications for all processed headers only if notifyTo - notifyFrom < 1024” - I like this logic.

- early exit if notification channel is not set
- change wording
@AndreaLanfranchi
Contributor Author

AndreaLanfranchi commented Oct 2, 2021

@AskAlexSharov question ... do we really need to pass NotifyHeaders this unwindTo *uint64?
I mean ... an unwind should already have persisted the Finish value after the cycle (before notification), so basically
stages.GetStageProgress(tx, stages.Finish) is the new finishStageAfterSync, and we simply need to compute the delta between
finishStageBeforeSync and finishStageAfterSync.
Am I missing something?
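
A short sketch of the delta computation described above, using Erigon's stage-progress API (tx and finishStageBeforeSync are assumed to be in scope):

	// Read the Finish-stage height persisted at the end of the cycle and
	// diff it against the height recorded before the cycle started.
	finishStageAfterSync, err := stages.GetStageProgress(tx, stages.Finish)
	if err != nil {
		return err
	}
	delta := finishStageAfterSync - finishStageBeforeSync // blocks to notify about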

@AndreaLanfranchi
Contributor Author

Also ... context.Context is not used in the function.
(Sorry to bother you, but in C++ we get warnings or errors for unused formal parameters.)

@AskAlexSharov
Collaborator

AskAlexSharov commented Oct 2, 2021

  • Agreed, this method was inside stage_finish in the past. I have an idea, something like:

	tx.ForEach(kv.Headers, bigEndian(notifyFrom), func(k, headerRLP []byte) error {
		notifier.OnNewHeader(headerRLP)
		return nil
	})

It will do a zero-copy broadcast (zero marshal/unmarshal) from DB to network of canonical and non-canonical headers, from notifyFrom to the end of the kv.Headers table.
Then we don't need the unwindTo parameter, don't need to get the stageFinish progress, etc.
It requires changing the signature of ChainEventNotifier.OnNewHeader, but that's easy because there is only one consumer of this API.

  • context is used to stop on Ctrl+C; we need to add this to the loop:

	if err := libcommon.Stopped(ctx.Done()); err != nil {
		return libcommon.ErrStopped
	}
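
Putting the two together, a combined sketch of how the cancellation check could sit inside the ForEach walker (bigEndian stands in for the actual block-number encoding, as above):

	if err := tx.ForEach(kv.Headers, bigEndian(notifyFrom), func(k, headerRLP []byte) error {
		// Check for Ctrl+C between rows so a long broadcast can be interrupted.
		if err := libcommon.Stopped(ctx.Done()); err != nil {
			return err
		}
		notifier.OnNewHeader(headerRLP)
		return nil
	}); err != nil {
		return err
	}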

@AndreaLanfranchi
Contributor Author

AndreaLanfranchi commented Oct 2, 2021

of canonical and non-canonical headers

Not sure I understand this. Doesn't the RPC daemon need to be notified of the highest canonical header(s) only?

@AskAlexSharov
Collaborator

The docs say (https://geth.ethereum.org/docs/rpc/pubsub):

newHeads

Fires a notification each time a new header is appended to the chain, including chain reorganizations. Users can use the bloom filter to determine if the block contains logs that are interesting to them.

In case of a chain reorganization the subscription will emit all new headers for the new chain. Therefore the subscription can emit multiple headers on the same height.

@AskAlexSharov
Collaborator

I still think having 1024 as a limit makes sense.

@AndreaLanfranchi
Contributor Author

I still think having 1024 as a limit makes sense.

1024, or 8096 like canRunCycleInOneTransaction here? If the latter, we can pass the boolean and avoid redoing the math.

@AskAlexSharov
Collaborator

I'm OK with re-using canRunCycleInOneTransaction.

@AndreaLanfranchi
Contributor Author

I did my best (my Go skills are at "Hello World" level):

  • All headers are notified (canonical or not) within a max span of 1024
  • NotifyHeaders now receives the stage "Finish" height after the cycle (which is already computed in StageLoopStep)
  • I wish I could print the number of the highest header notified, but apparently I am not able to set a capture variable in the walk function of ForEach

I did not dare change the signature of OnNewHeader, as I see there are multiple dependencies which might get broken.

Please take a look ... and don't be afraid to be rude :D

@AndreaLanfranchi AndreaLanfranchi changed the title Update stage_finish.go with 1 notification Update stage_finish.go : notifications to rpc daemon Oct 2, 2021
@AskAlexSharov
Collaborator

Thank you!

  • About the capture variable: declare the variable before the ForEach with var lastKey []byte, and inside the walk func assign to it with lastKey = k. Then use the lastKey variable after ForEach (see the sketch after this list).
  • "I did not dare to change the signature of OnNewHeader as I see there are multiple dependencies which might get broken" - no, only one. We can do it later.

@AskAlexSharov
Collaborator

Another way to print it is to subscribe to the RPCDaemon as a client. For example:

	wscat -c ws://localhost:8545
	{"id": 1, "method": "eth_subscribe", "params": ["newHeads"]}

rpcdaemon must be run with the --ws flag to enable WebSocket.
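
For reference, a newHeads notification arriving over the socket has roughly this shape (abbreviated; the values are placeholders and the full field set follows the standard header JSON):

	{"jsonrpc": "2.0", "method": "eth_subscription",
	 "params": {"subscription": "0x...", "result": {"number": "0x1b4", "hash": "0x...", "parentHash": "0x...", ...}}}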

@AndreaLanfranchi AndreaLanfranchi marked this pull request as ready for review October 3, 2021 10:04
@AskAlexSharov AskAlexSharov merged commit f70dd63 into devel Oct 4, 2021
@AskAlexSharov AskAlexSharov deleted the al-20210930-patch-2 branch October 4, 2021 00:30
mandrigin added a commit that referenced this pull request Oct 7, 2021
* begin 2021.10.2 release cycle

* Revert "rpcdaemon: (#2752)" (#2762)

This reverts commit 2afd028.

* Pool v2: --txpool.pricelimit support (#2763)

* --txpoo.pricelimit support

* Pool v2: --txpool.accountslots flag support (#2765)

* Update signal_windows.go (#2767)

Trap os.interrupt instead of SIGINT and SIGTERM

* Update stage_finish.go : notifications to rpc daemon (#2755)

* Dockerfile: switch to go1.17 and alpine3.14 (#2766)

* add logs in recoverFromDb func (#2769)

* eip 1559 in miner (#2773)

* Inner errors (#2774)

* Clean up DEBUG category logs (#2776)

- move many DEBUG logs into TRACE category

* Decoding incarnation implemented (#2764)

* WIP decoding incarnation specifically

* Changed decodeIncarnation to be an external function

* added tests to for decoding incarnations

* ran gofmt -w -s

* changed test name, and changed incarnations to 4

* Created a test which tests if it returns an error when there is one

* ran gofmt

* Capitalized all tests and made breaking test

* added an error check

* changed decodingForStorage for decodingIncarnationFromStorage

* ran gofmt -w -s

* No senders is fine (#2775)

* IntermediateHash stage - switch from incremental to re-generate mode - if jump > 100K blocks (#2781)

* Enable "State stream" by default (#2780)

* No json rpc streaming (#2779)

* reduce_bach_concurrency_default

* RPCDaemon: reduce --rpc.batch.concurrency default from 50 to 2 (#2784)

* Integration to print right stage in logs (#2785)

* remove debug prints

* RemoteDB: don't spend time to close cursors on end of tx - server will cleanup everything well (#2786)

* Fermion genesis block (#2787)

* updated Fermion genesis block

* Updated Fermion genesis block: added precompiles

* Rpcdaemon: add  --tevm flag to enable experiment (#2788)

* Reworkings of state compression experiments (#2790)

* Changes

* Progress

* Another way

* More

* More

* Produce encoding

* Add uncoded characters

* cleanup

* Add sortdict

* Fixes

* Use patricia from erigon-lib

* Cleanup

* Switch to dynamic programming, optimise allocations in FindMatches

* Optimise allocations

* Reduce allocations

* Switch to main branch of erigon-lib, reduce allocations further

Co-authored-by: Alexey Sharp <[email protected]>
Co-authored-by: Alex Sharp <[email protected]>

* Ignore MaxPeers param for staticpeers (#2789)

Co-authored-by: Aleksandr Borodulin <[email protected]>

* less warnings

* Update skip_analysis.go (#2792)

* Extend preverified hashes for mainnet and ropsten (#2793)

Co-authored-by: Alexey Sharp <[email protected]>

Co-authored-by: Alex Sharov <[email protected]>
Co-authored-by: Andrea Lanfranchi <[email protected]>
Co-authored-by: Enrique Jose  Avila Asapche <[email protected]>
Co-authored-by: e-danko <[email protected]>
Co-authored-by: ledgerwatch <[email protected]>
Co-authored-by: Alexey Sharp <[email protected]>
Co-authored-by: Alex Sharp <[email protected]>
Co-authored-by: Alexandr Borodulin <[email protected]>
Co-authored-by: Aleksandr Borodulin <[email protected]>